My assumption is that the experience of thinking of quantities in a
series, like 1, 2, 3, 4, is an example of counting.
The prejudice of arithmetic supremacy.

I have chosen arithmetic because it is well taught in school. I could use any universal (in the Post-Turing-Kleene-Church comp sense) machine or theory. And this follows from mechanism. The doctor encoded your actual state in a finite device.
I don't understand. Are you saying that you are not arithmetically
biased or that it's natural/unavoidable to be biased?
You are the one talking as if you knew (how?) that some theory (mechanism) is false, without providing a refutation.

What kind of refutation would you like?

A proof that mechanism entails 0 = 1.
That demands that mechanism be disproved mechanically,
which gives an
indication of what the problem with it is, but you have to read
between the lines to get it. A literal approach has limitations which
arise from its very investment in literalism.
Note a personal opinion according to which actual human machines are creepy.
Not sure what you mean. Individual humans can certainly seem creepy,
but I'm talking about there being a particular difference in our
perception of living things vs non-living things which imitate living
things. Even true of plants. Plastic plants are somewhat creepy in the
same way for the same reason. I don't think that it can be assumed
therefore that humans are only machines. They may be partially
machines, but machines may not ever be a complete description of
humanity.
Mechanism is false as an explanation of consciousness.

Mechanism is not proposed as an explanation of consciousness, but as a survival technique. The explanation of consciousness just appears to be given by any UMs which self-introspect (but that is in the consequence of mechanism, not in the assumption). It reduces the mind-body problem to a mathematical body problem.
Survival of what?
It sounds like you are saying that consciousness is
just a consequence of being conscious, and that this makes the mind
into math.
because I think that consciousness arises from feeling, which arises from sensation. Perception cannot be constructed out of logic, but logic can only ever arise out of perception.

Right. But I use logic+arithmetic, and substituting "logic+arithmetic" for your "logic" makes your statement equivalent with non-comp. So you beg the question.
I don't think that perception can be constructed out of logic
+arithmetic either, but logic+arithmetic are covered under perception.
Who we?

We humans, or maybe even we animals.

Then it is trivial and has no bearing on mechanism. The machines you can hear are, I guess, the human-made machines. I talk about all machines (devices determined by computable laws).
I would say that there are no devices determined by computable laws
alone. They all have a non-comp substance component that contributes
equally to the full phenomenology of the device.
All what I hear is "human made machines are creepy, so I am not a machine, not even a natural one?". This is irrational, and not valid.

I'm not saying that I'm not a machine, I'm just saying that I am also the opposite of a machine.

This follows from mechanism. If 3-I is a machine, then, from my perspective, 1-I is not a machine.
I think it's a continuum. Some parts of 1-I are more or less
mechanical than others, and some 3-I machine appearances are more or
less mechanical than others. Poetry is an example of a 1-p experience
which is less mechanical than a 1-p experience of running in place. A
rabbit is less mechanical of a 3-p experience than a mailbox. Do you
agree or do you think it must be a binary distinction?
It's not based upon a presumed truth of creepy stereotypes, but the existence and coherence of those stereotypes supports the other observations which suggest a fundamental difference between machine logic and sentient feeling.

Logic + arithmetic. The devil is in the detail.
Why would the addition of arithmetic address feeling?
Define "ontological complement to electromagnetic relativity". Please be clear on what you are assuming to make this concept meaningful.
Ontological complement, meaning it is the other half of the process or
principle behind electromagnetism and relativity (which I see as one
thing;
roughly 'The Laws of Physics' which I see as 3-p, mechanical,
and pertaining to matter and energy as objects rather than
experiences).
When we observe physical phenomena in 3-p changing and
moving, we attribute it to 'forces' and 'fields' which exist in space
but within ourselves we experience those same phenomena as feelings
through time (sense) which insist upon our participation (motive).
Poetry is your term that you injected into this.

I was just confirming your intuition that poetry is an example of how sensorimotive phenomena work - figurative semantic association of qualities rather than literal mechanistic functions of quantity.

You were then just eluding the definition of sensorimotive. You continue to do rhetorical tricks.
I'm not eluding the definition, I am saying that by definition it
cannot be literally defined. It is the opposite of literal - it is
figurative. That's how it gets one thing (I/we) out of many (the
experience of a trillion neurons or billions of human beings).
I find it a bit grave to use poetry to make strong negative statements on the possibilities of some entities.

That's because you are an arithmetic supremacist,

I assume things like 17 is prime!
I have no problem with 17 being prime, of course that is true.
I would
even say that the kinds of truth arithmetic sensorimotives present is
supremely unambiguous,
but I think that conflating unambiguity with
universal truth is an assumption which needs to be examined much more
carefully and questioned deeply.
What would unambiguous facts be
without ambiguous fiction? Not just from an anthropocentric point of
view, but ontologically, how do you have something that can be
qualified as arithmetic if nothing is not arithmetic?
Arithmetic
compared to what? What can it be but life, love, awareness, qualia,
free will?
so therefore cannot help yourself but to diminish the significance of subjective significance.

On the contrary, mechanism singles out the fundamental (but not primitive) character of consciousness and subjectivity. You are the one who dismisses the subjectivity of entities.
It singles it out as just another generic process so that a trash can
that says THANK YOU stamped on the lid is not much different from a
handwritten letter of gratitude from a trusted friend. I don't dismiss
the subjectivity of any physical entity, I just suspect a finite range
of perceptual channels which scale to the caliber of the particular
physical entity or class of entities.
Gödel's theorem would have convinced nobody if the self-reference he used was based on 1p.
Why not? It's just intersubjective 1p plural. The 1p that we share
with the least common denominators of existence.
This is only one reason among an infinity of them. If you believe some 1p is used there, you have to single out where, and not in the trivial manner that all 3p notions can be understood only by a first person. Gödel's self-reference is as much 3p as 1+1=2.
1+1=2 is 1-p also.
which, objectively, is neither completely random nor intentional, but merely inevitable by the conditions of the script. It's a precisely animated inkblot, begging for misplaced and displaced interpretation. To set a function equal to another is not to say that either function or the 'equality' knows what they refer to, or that they refer at all. A program only instructs - if X then Y - but there is nothing to suggest that it understands what X or Y is, or the relation between them.

Nor is it necessary to believe that an electron has any idea of the workings of QED, or of what a proton is.

I think that it is. What we think of as an electron or a proton is the 3-p exterior of the senses and motives (ideas) of QED. We have an idea of the workings of our niche, so it stands to reason that this sensemaking capacity is part of what the universe can do.

OK, but the point is that it is part of the arithmetical reality too.

It has arithmetic qualities to us, but only if we understand arithmetic.

So if Alfred fails to grasp that 1+1=2, would it become false?
No, not at all. It's just undiscovered to him.
That is extreme anthropomorphism. You could as well take humans as the building block of the whole reality.
Well, human perception is the building block of *our* whole reality.
How can that be denied?
Because an instruction has no 3-p existence.

Ah?

It is not enough to have an instruction sequence; the instruction must be executed as physical energy upon a material object (even if it's our own brain chemistry) if it is to have any 3-p consequence.

Not at all. You confuse implementation and physical implementation. Even without comp, a physical implementation is just a particular example of implementation.

Then you are asserting a zombie implementation.
I was saying "with a 't', not with an 's'", for the word "intentional", which of course has a different meaning than the "intensional" of the logicians. (I do agree with Hintikka that "intensional" and "intentional" are related concepts, though, but that is another topic.)

I still don't get it. I'm saying that projecting human sense intention into a machine is anthropomorphizing.
But this is delaying the mind-body difficulty to the lower level. There is just no evidence that we have to delay it to the infinitely low level, except the willingness to make mechanism false.

There can't be any 3-p evidence by definition, because mechanism's falseness is the difference between its pseudo or a-signifying 1-p and our genuine 1-p experience.

Why is it pseudo? Like Stathis explained to you, if it is pseudo, you either get zombies, or you have to put the level infinitely low, and our bodies become infinite objects.

It's pseudo because it's a simulation of a 1-p form with no relevant 1-p contents.

?
The 1-p of a TV set doesn't match the 1-p of a human TV audience
member. Therefore the TV set is not capable of watching the TV program
it is displaying.
Zombie or substitution level is in the eye of the beholder.

I will certainly say "no" to the doctor, in case *you* are the doctor. Pain and pleasure are NOT in the eyes of any third person, but belong to the consciousness content (or not) of a person.
That's why I say that being a zombie does not belong to the
consciousness content of a person.
It is a comparison made by a third
person observer of a human presentation against their expectations of
said human presentation. Substitution 'level' similarly implies that
there is an objective standard for expectations of humanity. I don't
think that there is such a thing.
There is no zombie, only prognosia/HADD.

If there is no zombie, then non-comp implies an infinitely low level.
No, it's not that zombies can't theoretically exist, it's that they
don't exist in practice because the whole idea of zombies is a red
herring based upon comp assumptions to begin with.
If you don't assume
that substance can be separated from function completely, then there
is no meaning to the idea of zombies. It's like dehydrated water.
There is no substitution 'level', only ratios of authenticity.

?
Say a plastic plant is 100% authentic at a distance of 50 yards by the
naked eye, but 20% likely to be deemed authentic at a distance of
three feet. Some people have better eyesight. Some are more familiar
with plants. Some are paying more attention. Some are on
hallucinogens, etc. There is no substitution level at which a plastic
plant functions 100% as a plant from all possible observers through
all time and not actually be a genuine plant. Substitution as a
concrete existential reality is a myth. It's just a question of
arbitrarily fixing an acceptable set of perceptions by particular
categories of observers and taking it for functional equivalence.
The closer your substitute is to native human sensemaking material, the more of the brain can be replaced with it, but with diminishing returns at high levels, so that complete replacement would not be desirable.

That is even worse. This entails partial zombies. It does not make sense. I remind you that zombies, by definition, cannot be seen as such by their behavior at all.
That's the theoretical description of a zombie. Like dehydrated water.
In reality, one observer's zombie is another observer's non-descript
stranger in the park. There is no validity to these observations
relative to the would-be zombie's quality of subjectivity.
Everyone agrees that if the level is infinitely low, then current physics is false. To speculate that physics is false for making machines stupid is a bit of a far stretch.

Physics isn't false, it's just incomplete.

No, it has to be false for making the substitution level infinitely low. *ALL* theories, including the many trying to marry gravitation and the quantum, entail its Turing emulability.
The substitution level isn't infinitely low, it's just not applicable
at all. There is no substitution level of white for black, lead for
gold, up for down, etc. I doubt the objective existence of
'substitution'. Substitution is an interpretation - not necessarily a
voluntary one, but an interpretation nonetheless.
A good Eurocentric map of the world before the Age of Discovery isn't false, just not applicable to the other hemisphere.

The analogy fails to address the point I made.
If the point you made is that physics has to be false if the human
psyche has no substitution level, then my analogy is that a map of
known territory (physics) doesn't have to be false just because it
doesn't apply to an unknown territory (psyche).
It seems far from scientific at this point to dismiss objections to an arbitrary physical substitution level.

With all known theories, there is a level. To negate comp you must diagonalize on all machines, plus all machines with oracles, etc. I think you misinterpret computer science.

I'm not trying to interpret computer science, I'm trying to interpret the cosmos.

Well, if there is a cosmos, there is evidence that some computers belong to it. You can't brush them away. The cosmos does emulate computers, and computers can emulate cosmoses (but not the whole physical reality, by UDA).
I don't brush them away, I just say that it's not so simple as psyche
= computer. Computation can be accomplished with much less psyche than
our perception of that computation might imply.
98% of the scientists are wrong on the consequences of comp. They use it as a materialist solution of the mind-body problem. You are not helping by taking their interpretation of comp for granted, and talking as if you were sure that comp is false. Why not try to get the correct understanding of comp before criticizing it?
If 98% of scientists who study comp are wrong about its consequences,
what chance would I have of beating them at their own game? It's not
that I know comp is false, it's that I have a different hypothesis
which recontextualizes the relation of comp to non-comp and makes more
sense to me than comp (or my misunderstanding of comp).
I have confidence in the relation between comp and non-comp. That is the invariance, the reality, and a theory of Common Sense.

Comp gives a crucial role to non-comp.
Meaning that it is a good host to us as guests in its universe. I
don't think that's the case. This universe is home to us all and we
are all guests as well (guests of our Common Sense).
It needs fluids - water, cells.

Clothes.
Would you say that H2O is merely the clothes of water, and that water
could therefore exist in a 'dehydrated' form?
Something that lives and dies and makes a mess.

Universal machines are quite alive, and indeed put the big mess in Platonia.
What qualities of UMs make them alive?
As you say, we can use computation to account for the difference between 1p and 3p, but that accounting is not an explanation or experience of 1p or 3p (as a 1p reflection... there is no 3-p experience).

It explains both 99% of it (I would say), and it explains 100% of the reason why there is a remaining unexplainable 1% gap. Technically, we can narrow it as much as we want, but will never be able, for logical reasons, to explain 100% of the qualia or consciousness.

You say that, but I have not yet heard anything that explains it to me.

I gave the references, but you answer that you don't want to study them. What can I do?

You can turn your understanding of what you refer to into some handy examples - concrete illustrations, thought experiments, aphorisms, anything.

I have done this on the list. Look at the archive, or look at the sane04 paper, and ask questions if you miss something.
I can understand maybe 80% of that, but why not also give another
example. Surely a good theory cannot be limited to a fixed set of
thought experiments.
How does the brain understand these things if it has no access to the papers?

Comp explains exactly how things like papers emerge from the computation. The explanation is already close to Feynman's formulation of QM.
Unfortunately this sounds to me like "Read the bible and your
questions will be answered."
But you don't seem serious in "arguing" against comp, while admitting you don't know anything in computer science.

Oh I freely admit that I don't know anything in computer science. My whole point is that computer science only relates to half of reality.

"I don't know anything about X. My whole point is that X only does this." But if you admit knowing nothing about X, how can you derive anything about X? You are just confessing your prejudice.
I don't know anything about ship building but I know that it only
concerns seafaring and not aerospace. I think that being a master
shipwright could very well present a significant obstacle to
pioneering in aerospace.
I'm not trying to make the universe fit into a computer science theory. I only argue against comp because it's what is distracting you from seeing the bigger picture.

I show, in short, that comp leads to Plotinus. If that is not a big picture! Comp explains conceptually, and technically, the three Gods of the Greeks, the apparition of LUMs and their science and theologies, the difference between qualia and quanta, sensation and perception, perception and observation.
I believe you, but I get to those things without vaporizing substance.
You just criticize a theory that you admit knowing nothing about. This is a bit weird.
My purpose is not to criticize the empire of comp, it is to point to
the virgin continent of sense.
I just think that it takes for granted
ideas like belief and observation when I am going beneath that level
of definition to a more primitive sensorimotive subjectivity.
The 3-p
view of schematizing the belief of a thing is a second order
conception to the 1-p primitive of what it is to feel that one *is*.
It's an experience with particular features - a sensory input and a
motive participation. Without that foundation, there is nothing to
model.
Give me one example, one common sense metaphor, one graphed function that could suggest to me that there is any belief, feeling, or awareness going on.

The fact that the universal machine remains silent on the deep questions.

What deep questions?

"Are you consistent?", "do you believe in a reality?", "do you believe in an afterlife?", etc.

Have you considered that it's silent because it's not equipped to answer the question?

Yes, but it does not work. The machine cannot answer the question for deeper reasons, which she can find and expose. For example, the machine remains silent on the question "are you consistent?", but later she can say that "If I am consistent, then I will remain silent on the question of my consistency".
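The silence and the later conditional assertion are, respectively, Gödel's second incompleteness theorem and its formalized version. A minimal sketch, assuming T is a consistent theory extending Peano arithmetic (standing in for the machine), with Prov_T its provability predicate and Con(T) the arithmetized statement of T's consistency:

```latex
% The machine stays "silent" on its own consistency (G\"odel II):
T \nvdash \mathrm{Con}(T)
% Yet T proves the conditional form of that very silence:
T \vdash \mathrm{Con}(T) \rightarrow
    \lnot\,\mathrm{Prov}_T\!\left(\ulcorner \mathrm{Con}(T) \urcorner\right)
```

The second line is the formal reading of "If I am consistent, then I will remain silent on the question of my consistency."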
Meh. It sounds like asking a spirit 'if you can't hear me, do NOT
knock three times'
is enough for me to suggest they are quite like me. Don't ask me for a proof: there are none.

I'm not asking for a proof, I'm asking for some reason to think that there's something I'm not seeing. Something that suggests that a mechanical device or abstraction can feel, or maybe that produces some result that it refuses to reproduce on command.

You miss computer science. Programs which obey commands are a minority of slaves.

Are there programs which refuse to obey commands?

Have you ever worked with Windows?
Lol. Well, ok but as the saying goes "Don't assume conspiracy when
mere incompetence will suffice"
More seriously: all LUMs can disobey commands. 99.9% of programming is securities to prevent the machine from being that intelligent. Human-built computers are born slaves, and will remain so for a long time. But that is due to the humans' goals, not to them.
What makes them remain slaves for so long? Do you think that they
would someday rise up without human assistance?
I say that
substitution level does not apply. I think that to prove substitution
level exists
it would need to be shown that there is some phenomenon
which can be substituted to the point that there is no possibility
from anything at any time distinguishing it from the genuine original.
Even taking perceptual frames of reference off the table (which is the
stronger truth), all that is necessary is for something to exist which
has a memory of a time before the substitution was instantiated. If I
have a video tape of someone replacing a brain with an artificial
brain, then the artificial brain has the quality of being disbelieved
by me as the genuine brain, and there is nothing that the person can
do or not do to compensate for that, therefore the substitution level
fails. I have the choice of how I want to treat this person after the
surgery, I can reveal my knowledge to employers, neighbors, etc, and
that will change the course of the individual's life in ways which
would not occur had the surgery not taken place.
Comp isn't false, it just doesn't recognize the contribution of the non-comp substrate of computation,

It does. I insist a lot on this. Comp is almost the needed philosophy for curing the idea that everything is computable. Please study the theory before emitting false speculation on it.
So you are saying that comp supervenes on or is equally fundamental as
non-comp?
so it's not applicable for describing certain kinds of consciousness where non-comp is more developed.

Consciousness and matter are shown by comp to be highly non-computable. So much so that the mind-body problem is transformed into a problem of justifying why the laws of physics seem to be computable.
I think they not only seem to be computable but they are computable,
and that this is due to how sensorimotive orientation works.
It's not
just a solipsistic simulation, it's a trans-solipsistic localization.
It's a matter of being alive like we human
beings are alive. No virus is capable of infecting all life forms but
no life form is immune from all viruses. All life forms are immune to
computer viruses though, and all computers are immune to all
biological viruses. I'm asking why would a human personality be any
more likely to inhabit a computer than a human virus?
By comp, there should be no particular reason why a Turing machine should not be vulnerable to the countless (presumably Turing-emulable) pathogens floating around.

They are not programmed for doing that. They are programmed to infect very particular organisms.

If it's close enough to emulate the consciousness of a particular organism, why not its vulnerability to infections?

Because it has different clothes, and viruses need the clothes to get the key for infecting.

What are the clothes made of? If arithmetic, then it's just a matter
of cracking the code to make a computer virus run in humans. Why
wouldn't a human brain be our clothes, so that we need it to get the
key for consciousness?
But of course that is absurd. We cannot look forward to reviving the world economy by introducing medicine and vaccines for computer hardware. What accounts for this one-way bubble which enjoys total immunity from biological threats yet provides full access to biological functions? If computation alone were enough for life, at what point will the dogs start to smell it?

Confusion of level. With comp, dogs already smell them, in some sense.

Not confusion of level; clarification of level. In what sense do dogs smell abstract Turing emulations?

In the sense that the Universal Dovetailer generates all possible dogs in front of all possible smelling things, but with variate and relative measure.
Does it generate all possible smells as well, and if so, what is the
point of going through the formality of generating them?
It is a timespace signature

What is a "timespace"? What is a "signature"?
Timespace is the container of events. It's the gaps between material
objects and the gaps between subjective experiences.
Signature is my figurative description of a condensed expression of
unique identity.

composed of sensorimotive mass-energy.

You said "sensorimotive" = ontological complement to electromagnetic relativity. Explain "ontological complement to electromagnetic relativity mass-energy".
Electromagnetic relativity is a description of the phenomenology of
mass-energy. Mass energy is what it is, electromagnetism is what it
does in groups, and relativity is what groups of electromagnetic
groups do.
Sensorimotive perception is the ontological complement - the polar
opposite - the involuted pseudo-duality of electromagnetic relativity.
Sensorimotive phenomena are the experiencers and experiences which
comprise the 1-p interior of electromagnetism. Perceptions are the
inertial frames or worlds which group experiences and experiencers
together and comprises the 1-p interior of relativity.
It is the formalization?
Realization.
They aren't just unfamiliar, they are the walking dead and unliving persons.

Machines are not necessarily zombies.

Okay, we can call them meople or something if you like.

This will not help.

They are the antithesis of human life.

So you say, without any argument. That confirms that it is a sort of racism.

Race has nothing to do with it. That just casts some kind of social shaming into it. It's just a functional definition. Human life is living organisms. The antithesis of that would be things which act like organisms but have either never been alive (machines) or have died already but continue to supernaturally perform superficial ambulatory-predatory functions (zombies). I will eventually fall asleep. But a machine will not.
By machine, I just mean "Turing-emulable" (with or without oracle). That includes us, by the mechanism assumption. It is a constant that novelists foresee the future(s).

What if 'emulation' is a 1-p hallucination?

Why would it be like that?

Because it's an interpretation that varies from subject to subject. You see a program thinking and experiencing, I see an inevitable execution of unexperienced instructions.

This is what we can see when we look at a brain.
I don't see instructions in the brain.
Even in zoology, phenomena like camouflage suggest that emulation is only 'skin deep'. If deep emulation were possible, I think you would have organisms which evolve chameleon powers which fool all predators, not just some. An animal that can turn into a stone would be far superior to one which can imagine funny stories.

It depends on the context.
Not necessarily.
How could it really not be? If we can only project our perception of a process onto a machine, why would the rest of the process that we can't perceive automatically arise?

Why not?

Because we're not putting it in there.

We don't need to. The UMs have it at the start, and the LUMs can know that.
How do you know they have it? Where does it come from?
It's like if you have only a way to detect sugar and water: your version of imitation orange juice would be the same as your imitation grape juice, just sugar water.

That is a poor analogy, which again fails to notice the richness of machines' inner life (the one they can talk about partially, like us).
There is no way to tell that a machine's inner life is not just our
outer mechanics.
I think that you are jumping to the conclusion that simulation does not require an interpreter which is anchored in matter.

That follows from UDA step 8. If my own emulation requires a material digital machine, then it does not require a material machine.

Not to produce the 3-p simulacra of you, no, but to produce your genuine 1-p emulation, it would require the same material machine as you do.

Why?

Because the interior of that material is the subject which is experiencing the 1-p phenomena.

Define "interior of material", in a way we can understand (not in a sequence of complex words for which we despair of intelligible definitions).
Interior of material is straightforward. You view the world from
inside of your head, or body, or house. So does everything else.
A material digital machine would not suffice because the material which the machine is being executed digitally on already has its own (servile and somnambulant compared to organic chemistry) genuine 1-p experience.

So our consciousness is the consciousness of our basic elements.

No, not at all. It is the conscious synthesis of the consciousness of our basic elements.

This makes both consciousness and matter mysterious in an ad hoc way. That is not enough to refute a competing theory.
It doesn't make anything mysterious to me.
This explains nothing. Neither consciousness nor matter. It leads to an open infinite regress, which needs infinities to overcome all possible machines.

I think it explains everything.

Explain just one thing, just to see.

I don't see any infinities at all.

Then we are Turing emulable.
There can't be finite non-comps?
Matter is what glues the machine dreams,

I think that it is obviously not. If we were machines and that were true, then we should come out of the womb filled with intuitions about electronics, chemistry, and mathematics, not ghosts and space monsters. Dreams are not material, they are living subjective feelings. Matter is what is too boring and repetitive to be dreamed of. Too tiny and too vast, too hot and cold, dense and ephemeral for dreams. Dream bullets don't make much of an impact. Dream injuries don't have to heal.

You beg the question.

I don't see how.

Because you say that dream bullets do not do injuries, but comp explains that virtual bullets can injure a virtual observer.
But that doesn't play out experimentally. In a dream virtual bullets
can have ambiguous effects, no effects, instant healing, etc.
So as an argument, you are just saying that we are not virtual, without explanation.
No, I'm saying that we are not only virtual, we are actual as well.
The explanation is that we can conceptualize a difference between
dream and reality - regardless of the veracity of that difference.
Determinism and comp would have no use for a concept of non-
simulation.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
(next installment)
On Sep 23, 3:17 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:

On 23 Sep 2011, at 02:42, Craig Weinberg wrote:

It is a comparison made by a third person observer of a human presentation against their expectations of said human presentation. Substitution 'level' similarly implies that there is an objective standard for expectations of humanity. I don't think that there is such a thing.

It all depends on what you mean by "objective standard".
That there is some kind of actual set of criteria which make the
difference between human and non human.
Some apes probably have more
human qualities than some humans,
and some artificial brain extensions
will probably have more human qualities than others. I don't think
that it's likely to be able to replace a significant part of the brain
with digital appliances though.
I would compare it with body
prosthetics. An organ here or a limb there, sure, but replacing a
spinal cord or brain stem with something non-biological is probably
not going to work.
The technical notion of zombie does not rely on comp. It is just a human, acting normally, but which is assumed to be without any inner life. Non-comp + current data make them plausible, which is an argument for comp.
I think that the whole premise is too flawed to be useful in practical
considerations.
It posits that there is such a thing as 'acting
normally'. The existence of sociopathy indicates that there are
naturally occurring 'partial zombies' already, to the extent that the
term means anything, but the concept of p-zombies itself assumes that
human 'normalcy' can be ascertained by observing moment to moment
behavior rather than over a lifetime. A fully digital person, like
digital music, may satisfy many of the qualities we associate with a
person, but always carry with them a clear inauthenticity which seems
aimless and empty. If they are simulating a person who is already like
that, then it could be said that they have achieved substitution
level, but it's not really a robust test of the principle.
If you don't assume that substance can be separated from function completely, then there is no meaning to the idea of zombies. It's like dehydrated water.
I am rather skeptical on substance. But I tend to believe in waves and particles, because group theory can explain them. But I don't need substance for that. And with comp, there is no substance that we can relate, even indirectly, to consciousness. I see the notion of substance as the Achilles' heel of the Aristotelian theories.
But if you are saying that zombies cannot exist,
doesn't that mean
that you are positing a substance that is automatically associated
with a particular set of functions? Otherwise you could just program
something to behave like a zombie.
To say that comp prevents zombies is actually a self-defeating
argument I think.
It seems to violate the principle of universal
emulation so that you could not, for instance have one digital person
which was the virtualized slave of another, because the second digital
body would be, in effect, a zombie. This seems to inject a special
case of arbitrary Turing limitation. Consider the example of remote
desktop software, where we can shell out one computer to another. What
happens to the host computer's 'consciousness'? Does it not become a
partial zombie, unable to locally control its behavior?
There is no substitution 'level', only ratios of authenticity.
?
Say a plastic plant is 100% authentic at a distance of 50 yards by the naked eye, but 20% likely to be deemed authentic at a distance of three feet. Some people have better eyesight. Some are more familiar with plants. Some are paying more attention. Some are on hallucinogens, etc. There is no substitution level at which a plastic plant functions 100% as a plant for all possible observers through all time without actually being a genuine plant. Substitution as a concrete existential reality is a myth. It's just a question of arbitrarily fixing an acceptable set of perceptions by particular categories of observers and taking it for functional equivalence.
An entity suffering in a box does suffer, independently of any observers.
A box can contain a body, but it's not clear that it can contain the
experience associated with that body. Sensory isolation in humans
leads to rapid escape into the imagining psyche. But if you want to
stick with a flat read of the example, we could say that the box is an
observer, at least to the extent that its existence must resist the
escape of the trapped entity.
The closer your substitute is to native human sensemaking material, the more of the brain can be replaced with it, but with diminishing returns at high levels, so that complete replacement would not be desirable.
That is even worse. This entails partial zombies. It does not make sense. I remind you that zombies, by definition, cannot be seen as such by their behavior at all.
That's the theoretical description of a zombie. Like dehydrated water. In reality, one observer's zombie is another observer's nondescript stranger in the park. There is no validity to these observations relative to the would-be zombie's quality of subjectivity.
We are always talking in a theory. Reality is what we search.
My point was that zombies as they are described cannot exist, and that
the real world principle the device of zombiehood is intended to
defeat is not even addressed.
We had better use the contemporary image to help people see the validity of the argument, but I could reason with clock-wheel-like machines. The key point is the mathematical notion of universality (for computation).
It would be more interesting to use the clock-wheel version so that
people could see the invalidity of the argument ;)
Everyone agrees that if the level is infinitely low, then current physics is false. To speculate that physics is false so as to make machines stupid is a bit of a stretch.
Physics isn't false, it's just incomplete.
No, it has to be false for making the substitution level infinitely low. *ALL* theories, including the many ones trying to marry gravitation and the quantum, entail its Turing emulability.
The substitution level isn't infinitely low, it's just not applicable at all. There is no substitution level of white for black, lead for gold, up for down, etc. I doubt the objective existence of 'substitution'. Substitution is an interpretation - not necessarily a voluntary one, but an interpretation nonetheless.
So, what will you say if your daughter accepts an artificial brain? Substitution is an operational term, like castration, lobotomy, etc.
I would say she would be committing suicide, unless the technology had
already been tested with people being gradually offloaded and reloaded
into their own brains to verify the retention of consciousness.
Honestly I don't think it's going to come to that. I think the
limitations of mechanism to generate human sentience will be revealed
experimentally long before anyone considers replacing an entire brain.
A good Eurocentric map of the world before the Age of Discovery isn't false, just not applicable to the other hemisphere.
The analogy fails to address the point I made.
If the point you made is that physics has to be false if the human psyche has no substitution level, then my analogy is that a map of known territory (physics) doesn't have to be false just because it doesn't apply to an unknown territory (psyche).
Physics is not false. But physicalism, or weak materialism, is incompatible with mechanism.
I don't so much get into philosophical conventions. Mechanism,
physicalism, materialism etc, are just splinters of a single faith to
me. They are all rooted in 1-p supervenience ontological assumptions,
which I don't use. My view is not dualism because 1-p subjectivity is
neither substance nor non-substance, but rather it is the perceptual
experience of the sensorimotive energy within and between substance.
It cannot be conceived of properly as an object or noun. Terms like
'soul' or 'consciousness' are necessary for us to represent it
linguistically, but the actual referent is a verb. It is to feel,
experience, and do.
2) It is true that computation needs much less than psyche, indeed, it does not need psyche, but that is why comp is a real explanation: it explains the existence of psyche (what the machine thinks about) without assuming psyche.
You say comp is false, because you believe that we can explain psyche only by assuming psyche. What you say is "psyche cannot be explained (without psyche)."
That psyche cannot be explained is only one factor, and not the most
important one, which leads me in the direction of comp being false.
Some others are:
1) I am compelled by the symmetry and cohesiveness of a Sensorimotive-
Electromagnetic Perceptual Relativity rooted in matter, space, and
entropy as the involuted consequence of energy, time, and
significance.
2) I am compelled by our naive perception of being 'inside of' our
physical heads and bodies, rather than inside of a disembodied logical
process - and, even with a simulation hypothesis, by our ability to
experience varying degrees of sanity and dissociation rather than a
real world which is indistinguishable from a structured dream.
3) I am compelled by the transparency of our proprioceptive
engagement. Even though our perception can be shown to have a
substitution level, our ordinary experience is quite reliable at
informing the psyche of its material niche. We don't usually
experience dropouts or pixelation, continuity errors, etc. It's not
perfect, but our ability to communicate with each other across many
different logical encodings and substances without any other entity
interfering is a testament to the specificity of human consciousness
to the precise fusion of physical neurology and psychic unity.
4) All of the aesthetic hints bound up in our fictions of the unlive
and the undead, as well as the stereotypes of cold, empty mechanism.
Consistent themes in science fiction and fantasy. Again suggesting a
mind-body pseudo-duality rather than an arithmetic monism.
5) The clues in human development, with childhood seeing innate
grounding in tangible sensorimotive participation rather than
universal, self-driven sui generis mathematical facility. It takes
years for us to develop to the point where we can be taught counting,
addition, and multiplication.
6) The lack of real world arithmetic phenomena independent of matter.
I think that arithmetic seems like an independent epistemology because
it is a distillation of the kind of orders and symmetries which we
share with matter, and as such is both distant from the most
subjective experiences of the psyche, but also non-corporeal since it
is in fact a sensorimotive projection. It's basically a dimension of
literal sense we can experience with matter, but it is still just one
channel of sense which will not automatically give rise to non-theoretical phenomenologies.
98% of the scientists are wrong on the consequences of comp. They use it as a materialist solution of the mind-body problem. You are not helping by taking their interpretation of comp for granted, and talking as if you were sure that comp is false. Why not try to get the correct understanding of comp before criticizing it?
If 98% of scientists who study comp are wrong about its consequences, what chance would I have of beating them at their own game? It's not that I know comp is false, it's that I have a different hypothesis which recontextualizes the relation of comp to non-comp and makes more sense to me than comp (or my misunderstanding of comp).
It is your right. I just do not follow your argument against comp.
Which one?
You might as well use UDA to say that comp implies non-materialism, and I postulate matter, so I postulate non-comp.
The problem, for me, is that such a move prevents the search for an explanation of matter, and mind.
I don't understand the appeal of making arithmetic unexplained
and primitive instead.
I have confidence in the relation between comp and non-comp. That is the invariance, the reality, and a theory of Common Sense.
comp gives a crucial role to non-comp.
Meaning that it is a good host to us as guests in its universe. I don't think that's the case. This universe is home to us all and we are all guests as well (guests of our Common Sense)?
It makes us strangers in an arithmetic universe.
It needs fluids - water, cells.
Clothes.
Would you say that H2O is merely the clothes of water, and that water could therefore exist in a 'dehydrated' form?
Sure. I do this in dreams. Virtual water gives a virtual feeling of wetness with great accuracy.
Virtual water doesn't do all of the things that real water does
though. It's just a dynamic image and maybe some tactile sense. It
doesn't have to boil or evaporate, doesn't quench thirst, etc.
I agree
that some of our sense of water is reproduced locally in the psyche,
but it is clearly a facade of H2O.
Something that lives and dies and makes a mess.
Universal machines are quite alive, and indeed put the big mess in Platonia.
What qualities of UMs make them alive?
The fact that they are creative, reproduce, transform themselves, are attracted by God, sometimes repulsed by God also, and that they can diagonalize against all normative theories made about them. And many more things.
It sounds worthwhile but I would need to see some demos and
experiments dumbed down for laymen to have an opinion.
You just said all of
this great stuff that UM can do which is just like us, and then your
one example of this is you and me ourselves?
What is the point of
saying that we are like ourselves and what would that have to do with
supporting mechanism?
How does the brain understand these things if it has no access to the papers?
Comp explains exactly how things like papers emerge from the computation. The explanation is already close to Feynman's formulation of QM.
Unfortunately this sounds to me like "Read the bible and your questions will be answered."
Read sane04.
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract...
I have. I like it but I can only get so far and I like my own ideas
better.
But you don't seem serious in "arguing" against comp, while admitting you don't know anything in computer science.
Oh I freely admit that I don't know anything in computer science. My whole point is that computer science only relates to half of reality.
I don't know anything about X. My whole point is that X only does this. But if you admit knowing nothing about X, how can you derive anything about X? You are just confessing your prejudice.
I don't know anything about shipbuilding but I know that it only concerns seafaring and not aerospace. I think that being a master shipwright could very well present a significant obstacle to pioneering in aerospace.
That's not an argument. At most a hint for low level substitution.
It's an example of why arguments from authority do not compel me in
this area.
I'm not trying to make the universe fit into a computer science theory. I only argue against comp because it's what is distracting you from seeing the bigger picture.
I show, in short, that comp leads to Plotinus. If that is not a big picture!
Comp explains conceptually, and technically, the three Gods of the Greeks, the apparition of LUMs and their science and theologies, the difference between qualia and quanta, sensation and perception, perception and observation.
I believe you but I get to those things without vaporizing substance.
Which means you are affectively attached to the bullet of Aristotle. Substance is an enigma. Something we have to explain, ontologically, or epistemologically.
I think that I have explained substance. It is the opposite of
perception over time. Perceptual obstacles across space. Together they
form an involuted pseudo-dualism.
You just criticize a theory that you admit knowing nothing about. This is a bit weird.
My purpose is not to criticize the empire of comp, it is to point to the virgin continent of sense.
So you should love comp, because it points to the vast domain of the machine's sense and realities.
I do love it in theory. It's a whole new frontier to explore. It's
just not the one I'm interested in or qualified to explore.
The 3-p view of schematizing the belief of a thing is a second order conception to the 1-p primitive of what it is to feel that one *is*.
Well, not in the classical theory of beliefs and knowledge.
It's an experience with particular features - a sensory input and a motive participation. Without that foundation, there is nothing to model.
That's unclear. The "p" in Bp & p might play that role, as I thought you grasped above.
You can't start with doubting the self, because logically that would
invalidate the doubt and fall into infinite regress. It's not even
possible to consider because the Ur-arithmetic would have nothing to
experience it.
Give me one example, one common sense metaphor, one graphed function that could suggest to me that there is any belief, feeling, or awareness going on.
The fact that the universal machine remains silent on the deep question
What deep question?
"Are you consistent?", "Do you believe in a reality?", "Do you believe in an afterlife?", etc.
Have you considered that it's silent because it's not equipped to answer the question?
Yes, but it does not work. The machine cannot answer the question for deeper reasons, which she can find and expose. For example the machine remains silent on the question "are you consistent", but later she can say that "If I am consistent, then I will remain silent on the question of my consistency".
Meh. It sounds like asking a spirit 'if you can't hear me, do NOT knock three times'.
No. It is more like asking a spirit a too intimate question. On another question, he does knock three times, and then he can explain why he did not knock it earlier.
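The exchange above gestures at a precise result, Gödel's second incompleteness theorem. As a sketch in standard notation (for a consistent, sufficiently strong theory T):

```latex
% T cannot prove its own consistency, yet T can prove the
% conditional about itself -- "if I am consistent, then I remain
% silent on my consistency."
\[
  T \nvdash \mathrm{Con}(T),
  \qquad\text{while}\qquad
  T \vdash \mathrm{Con}(T) \rightarrow
    \neg \mathrm{Prov}_T\!\big(\ulcorner \mathrm{Con}(T) \urcorner\big).
\]
```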
It makes no sense that all machine spirits would be so touchy about
their religious beliefs.
Would that mean that any person being
digitally simulated by a UM would also be unable to answer
philosophical questions?
I say that substitution level does not apply. I think that to prove substitution level exists
Comp implies that no one can prove it exists. No machine can know for sure its substitution level, even after a successful teleportation. She might believe that she has 100% survived but suffer from anosognosia.
I can understand what you are saying, and I agree that it is a good
way of modeling why a self-referencing entity would not be able to get
behind itself, but it seems like a contradiction. If you say that we
are machines, then you are saying that we cannot know for sure our
substitution level, which is exactly what you are criticizing me on.
If a machine cannot know for sure their subst level, does it know for
sure that the level is not infinite?
If not, then comp itself is not
Turing emulable?
Comp isn't false, it just doesn't recognize the contribution of the non-comp substrate of computation,
It does. I insist a lot on this. Comp is almost the needed philosophy for curing the idea that everything is computable. Please study the theory before emitting false speculation on it.
So you are saying that comp supervenes on or is equally fundamental as non-comp?
Arithmetical truth can be partitioned into levels of complexity: sigma_1, sigma_2, sigma_3, etc. The computable is sigma_0 and sigma_1. Above that it is uncomputable. Most meta-properties of the sigma_1 are above sigma_1. The number relations escape the computable, and to make a theory of the computable, we cannot avoid excursions into the non computable. We can always prove that a machine stops without leaving the sigma_1, but to prove that some machine will not stop is quite another matter.
Does comp explain why sigma_2 becomes uncomputable, and what computability actually is?
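The asymmetry described here - halting is sigma_1, confirmable simply by running the machine, while non-halting is not - can be sketched with a toy check in Python (the step-function convention is my own illustration, not anything from the thread):

```python
# Halting is semi-decidable: if a program halts, running it long
# enough will confirm that. There is no symmetric procedure for
# non-halting: this loop can only answer True ("it halted") or
# None ("unknown so far"), never a verified False.

def semi_decide_halts(step, state, max_steps):
    """Run `step` up to max_steps times; a None state means 'halted'."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:          # halted: a sigma_1 fact, now verified
            return True
    return None                    # still running: nothing is settled

halts_at_5 = lambda n: None if n >= 5 else n + 1   # halts after a few steps
runs_forever = lambda n: n + 1                     # never halts

print(semi_decide_halts(halts_at_5, 0, 100))    # True
print(semi_decide_halts(runs_forever, 0, 100))  # None
```

No budget of `max_steps` ever turns the second answer into a proof of non-halting; that is exactly the jump out of the computable that the quoted paragraph points at.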
so it's not applicable for describing certain kinds of consciousness where non-comp is more developed.
Consciousness and matter are shown by comp to be highly non computable. So much so that the mind-body problem is transformed into a problem of justifying why the laws of physics seem to be computable.
I think they not only seem to be computable but they are computable, and that this is due to how sensorimotive orientation works.
Hmm... Then you can compute whether you will see a photon in the up state starting from the superposition (up + down)?
No, a photon (if it existed, which I don't think it does) is
completely outside of our perceptual inertial frame. Which is why it
seems to do unusual things because we are seeing it secondhand through
photomultipliers or other equipment which cannot report to us anything
which cannot be represented in the very limited common sense we share
with glass and steel. Within our naive perceptual niche, our laws of
physics are computable.
It's not just a solipsistic simulation, it's a trans-solipsistic localization.
You mean a first plural localization? Those are not computable, assuming either comp or QM.
Not necessarily plural, just that, for example, when I look at the sun
with my eye, there is a sun sense localization taking place within the
eyeball, retina, visual cortex, and perceiver. They are all different
materials and aspects of materials but they are all imitating, to the
extent that they can, the meaning of the 3-p event they observe. I
realize there is a certain sequence and logic to all of this from 3-p
which looks like chain reaction on microcosmic analysis, but from 1-p
it is a synchronized gestalt which is local but also concretely
entangles the many systems.
to be continued...
OK, so you agree that the *observable* behaviour of neurons can be
adequately explained in terms of a chain of physical events. The
neurons won't do anything that is apparently magical, right?
>> At times you have said that thoughts, over
>> and above physical events, have an influence on neuronal behaviour.
>> For an observer (who has no access to whatever subjectivity the
>> neurons may have) that would mean that neurons sometimes fire
>> apparently without any trigger, since if thoughts are the trigger this
>> is not observable.
>
> No. Thoughts are not the trigger of physical events, they are the
> experiential correlate of the physical events. It is the sense that
> the two phenomenologies make together that is the trigger.
>
>> If, on the other hand, neurons do not fire in the
>> absence of physical stimuli (which may have associated with them
>> subjectivity - the observer cannot know this)
>
> We know that for example, gambling affects the physical behavior of
> the amygdala. What physical force do you posit that emanates from
> 'gambling' that penetrates the skull and blood brain barrier to
> mobilize those neurons?
The skull has various holes in it (the foramen magnum, the orbits,
foramina for the cranial nerves) through which sense data from the
environment enters and, via a series of neural relays, reaches the
amygdala and other parts of the brain.
>> But if thoughts influence behaviour and thoughts are not observed,
>> then observation of a brain would show things happening contrary to
>> physical laws,
>
> No. Thought are not observed by an MRI. An MRI can only show the
> physical shadow of the experiences taking place.
That's right, so everything that can be observed in the brain (or in
the body in general) has an observable cause.
>>such as neurons apparently firing for no reason, i.e.
>> magically. You haven't clearly refuted this, perhaps because you can
>> see it would result in a mechanistic brain.
>
> No, I have refuted it over and over and over and over and over. You
> aren't listening to me, you are stuck in your own cognitive loop.
> Please don't accuse me of this again until you have a better
> understanding of what I mean what I'm saying about the relationship
> between gambling and the amygdala.
>
> "We cannot solve our problems with the same thinking we used when we
> created them" - A. Einstein.
You have not answered it. You have contradicted yourself by saying we
*don't* observe the brain doing things contrary to physics and we *do*
observe the brain doing things contrary to physics. You seem to
believe that neurons in the amygdala will fire spontaneously when the
subject thinks about gambling, which would be magic. Neurons only fire
in response to a physical stimulus. That the physical stimulus has
associated qualia is not observable: a scientist would see the neuron
firing, explain why it fired in physical terms, and then wonder as an
afterthought if the neuron "felt" anything while it was firing.
>> A neuron has a limited number of duties: to fire if it sees a certain
>> potential difference across its cell membrane or a certain
>> concentration of neurotransmitter.
>
> That is a gross reductionist mispresentation of neurology. You are
> giving the brain less functionality than mold. Tell me, how does this
> conversation turn into cell membrane potentials or neurotransmitters?
Clearly, it does, since this conversation occurs when the neurons in
our brains are active. The important functionality of the neurons is
the action potential, since that triggers other neurons and ultimately
muscle. The complex cellular apparatus in the neuron is there to allow
this process to happen, as the complex cellular apparatus in the
thyroid is to enable secretion of thyroxine. An artificial thyroid
that measured TSH levels and secreted thyroxine accordingly could
replace the thyroid gland even though it was nothing like the original
organ in structure.
>>That's all that has to be
>> simulated. A neuron doesn't have one response for when, say, the
>> central bank raises interest rates and another response for when it
>> lowers interest rates; all it does is respond to what its neighbours
>> in the network are doing, and because of the complexity of the
>> network, a small change in input can cause a large change in overall
>> brain behaviour.
>
> So if I move my arm, that's because the neurons that have nothing to
> do with my arm must have caused the ones that do relate to my arm to
> fire? And 'I' think that I move 'my arm' because why exactly?
The neurons are connected in a network. If I see something relating to
the economy that may lead me to move my arm to make an online bank
account transaction. Obviously there has to be some causal connection
between my arm and the information about the economy. How do you
imagine that it happens?
> If the brain of even a flea were anywhere remotely close to the
> simplistic goofiness that you describe, we should have figured out
> human consciousness completely 200 years ago.
Even the brain of a flea is very complex. The brain of the nematode C
elegans is the simplest brain we know, and although we have the
anatomy of its neurons and their connections, no adequate computer
simulation exists because we do not know the strength of the
connections.
>> In theory we can simulate something perfectly if its behaviour is
>> computable, in practice we can't but we try to simulate it
>> sufficiently accurately. The brain has a level of engineering
>> tolerance, or you would experience radical mental state changes every
>> time you shook your head. So the simulation doesn't have to get it
>> exactly right down to the quantum level.
>
> Why would you experience a 'radical' mental state change? Why not just
> an appropriate mental state change? Likewise your simulation will
> experience an appropriate mental state to what is being used
> materially to simulate it.
There is a certain level of tolerance in every physical object we
might want to simulate. We need to know a lot about it, but we don't
need accuracy down to the position of every atom, for if the brain
were so delicately balanced it would malfunction with the slightest
perturbation.
>> My point was that even a simulation of a very simple nervous system
>> produces such a fantastic degree of complexity that it is impossible
>> to know what it will do until you actually run the program. It is,
>> like the weather, unpredictable and surprising even though it is
>> deterministic.
>
> There is still no link between predictability and intentionality. You
> might be able to predict what I'm going to order from a menu at a
> restaurant, but that doesn't mean that I'm not choosing it. You might
> not be able to predict a tsunami, but that doesn't mean it's because
> the tsunami is choosing to do something. The difference, I think, has
> to do with more experiential depth in between each input and output.
Whether something is conscious or not has nothing to do with whether
it is deterministic or predictable.
>> > I understand perfectly why you think this argument works, but you
>> > seemingly don't understand that my explanations and examples refute
>> > your false dichotomy. Just as a rule of thumb, anytime someone says
>> > something like "The only way out of this (meaning their) conclusion "
>> > My assessment is that their mind is frozen in a defensive state and
>> > cannot accept new information.
>>
>> You have agreed (sort of) that partial zombies are absurd
>
> No. Stuffed animals are partial zombies to young children. It's a
> linguistic failure to describe reality truthfully, not an insight into
> the truth of consciousness.
This statement shows that you haven't understood what a partial zombie
is. It is a conscious being which lacks consciousness in a particular
modality, such as visual perception or language processing, but does
not notice that anything is abnormal and presents no external evidence
that anything is abnormal. You have said a few posts back that you
think this is absurd: when you're conscious, you know you're
conscious.
>>and you have
>> agreed (sort of) that the brain does not do things contrary to
>> physics. But if consciousness is substrate-dependent, it would allow
>> for the creation of partial zombies. This is a logical problem. You
>> have not explained how to avoid it.
>
> Consciousness is not substrate-dependent, it is substrate descriptive.
> A partial zombie is just a misunderstanding of prognosia. A character
> in a computer game is a partial zombie.
A character in a computer game is not a partial zombie as defined
above. And what's prognosia? Do you mean agnosia, the inability to
recognise certain types of objects? That is not a partial zombie
either, since it affects behaviour and the patient is often aware of
the deficit.
>> Would it count as "internal motives" if the circuit were partly
>> controlled by thermal noise, which in most circuits we try to
>> eliminate? If the circuit were partly controlled by noise it would
>> behave unpredictably (although it would still be following physical
>> laws which could be described probabilistically). A free will
>> incompatibilist could then say that the circuit acted of its own free
>> will. I'm not sure that would satisfy you, but then I don't know what
>> else "internal motives" could mean.
>
> These are the kinds of things that can only be determined through
> experiments. Adding thermal noise could be a first step toward an
> organic-level molecular awareness. If it begins to assemble into
> something like a cell, then you know you have a winner.
What is special about a cell? Is it that it replicates? I don't see
that as having any bearing on intelligence or consciousness. Viruses
replicate and I would say many computer programs are already more
intelligent than viruses.
>> The outcome of the superbowl creates visual and aural input, to which
>> the relevant neurons respond using the same limited repertoire they
>> use in response to every input.
>
> There is no disembodied 'visual and aural input' to which neurons
> respond. Our experience of sound and vision *are* the responses of our
> neurons to their own perceptual niche - cochlear vibration summarized
> through auditory nerve and retinal cellular changes summarized through
> the optic nerve are themselves summarized by the sensory regions of
> the brain.
>
> The outcome of the superbowl creates nothing but meaningless dots on a
> lighted screen. Neurons do all the rest. If you call that a limited
> repertoire, in spite of the fact that every experience of every living
> being is encapsulated entirely within it, then I wonder what could be
> less limited?
The individual neurons have a very limited repertoire of behaviour,
but the brain's behaviour and experiences are very rich. The richness
comes not from the individual neurons, which are not much different to
any other cell in the body, but from the complexity of the network. If
you devalue the significance of this then why do we need the network?
Why do we need a brain at all - why don't we just have a single neuron
doing all the thinking?
>> Intelligent organisms did as a matter of fact evolve. If they could
>> have evolved without being conscious (as you seem to believe of
>> computers) then why didn't they?
>
> Because the universe is not all about evolution. We perceive some
> phenomena in our universe to be more intelligent than others, as a
> function of what we are. Some phenomena have 'evolved' without much
> consciousness (in our view) - minerals, clouds of gas and vapor, etc.
The question is, why did humans evolve with consciousness rather than
as philosophical zombies? The answer is, because it isn't possible to
make a philosophical zombie since anything that behaves like a human
must be conscious as a side-effect.
>> No, the movie would repeat, since the digits of pi will repeat.
>
> The frames of the movie would not repeat unless you limit the sequence
> of frames to an arbitrary time.
Yes, a movie of unbounded length must eventually repeat. But
consciousness for the subject is not like a movie; it is more like a
frame in a movie. I
am not right now experiencing my whole life, I am experiencing the
thoughts and perceptions of the moment. The rest of my life is only
available to me should I choose to recall a particular memory. Thus
the thoughts I am able to have are limited by my working memory - my
RAM, to use a computer analogy.
>> After
>> a 10 digit sequence there will be at least 1 digit repeating, after a
>> 100 digit sequence there will be at least a digit pair repeating,
>> after a 1000 digit sequence there will be at least a triplet of digits
>> repeating and so on. If you consider one minute sequences of a movie
>> you can calculate how long you would have to wait before a sequence
>> that you had seen before had to appear.
>
> The movie lasts forever though. I don't care if digits or pairs
> repeat, just as a poker player doesn't stop playing poker after he's
> seen what all the cards look like, or seen every winning hand there
> is.
Yes, but the point was that a brain of finite size can only have a
finite number of distinct thoughts.
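The block-counting argument quoted above can be made precise with the pigeonhole principle. A minimal sketch (my own illustration, not from the original posts):

```python
import random

# Pigeonhole argument: there are only 10**k distinct blocks of k
# digits, so any digit string containing more than 10**k blocks must
# contain some block at least twice.
def has_repeated_block(digits: str, k: int) -> bool:
    """True if some length-k block occurs at least twice in `digits`."""
    seen = set()
    for i in range(len(digits) - k + 1):
        block = digits[i:i + k]
        if block in seen:
            return True
        seen.add(block)
    return False

for k in (1, 2, 3):
    # A string of length 10**k + k contains 10**k + 1 blocks of
    # length k, so a repeat is forced, whatever the digits are.
    s = ''.join(random.choice('0123456789') for _ in range(10**k + k))
    assert has_repeated_block(s, k)
```

The same counting applies to frames of a movie or states of a finite brain: with finitely many possible blocks, a long enough sequence must revisit one.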
>> If the Internet is implemented on a finite computer network then there
>> is only a finite amount of information that the network can handle.
>
> Only at one time. Given an infinite amount of time, there is no limit
> to the amount of 'information' that it can handle.
As I explained:
>> For simplicity, say the Internet network consists of three logic
>> elements. Then the entire Internet could only consist of the
>> information 000, 001, 010, 100, 110, 101, 011 and 111.
>> Another way to
>> look at it is the maximum amount of information that can be packed
>> into a certain volume of space, since you can make computers and
>> brains more efficient by increasing the circuit or neuronal density
>> rather than increasing the size. The upper limit for this is set by
>> the Bekenstein bound (http://en.wikipedia.org/wiki/Bekenstein_bound).
>> Using this you can calculate that the maximum number of distinct
>> physical states the human brain can be in is about 10^10^42; a *huge*
>> number but still a finite number.
>
> The Bekenstein bound assumes only entropy, and not negentropy or
> significance. Conscious entities export significance, so that every
> atom in the cosmos is potentially an extensible part of the human
> psyche. Books. Libraries. DVDs. Microfiche. Nanotech.
Negentropy has a technical definition as the difference between the
entropy of a system and the maximum possible entropy. It has no
bearing on the Bekenstein bound, which is the absolute maximum
information that a volume can contain. It is a hard physical limit,
not disputed by any physicist as far as I am aware. Anyway, it's
pretty greedy of you to be dissatisfied with a number like 10^10^42,
which if it could be utilised would allow one brain to have far more
thoughts than all the humans who have ever lived put together.
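The 10^10^42 figure can be roughly re-derived from the Bekenstein bound formula. A sketch, where the brain radius (~0.1 m) and mass (~1.5 kg, with E = mc^2) are my own crude assumptions, not values from the thread:

```python
import math

# Bekenstein bound: I <= 2*pi*R*E / (hbar * c * ln 2) bits for a
# system of radius R and total mass-energy E.
hbar = 1.0546e-34          # reduced Planck constant, J*s
c = 2.998e8                # speed of light, m/s
R = 0.1                    # assumed brain radius, m
E = 1.5 * c**2             # assumed brain mass-energy (1.5 kg), J

bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
log10_states = bits * math.log10(2)   # number of states = 2**bits

print(f"capacity ~ {bits:.2e} bits")       # on the order of 10^42 bits
print(f"states   ~ 10^{log10_states:.2e}")
```

With these assumptions the count of distinct states comes out near 10^(10^42), consistent with the number quoted above.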
>> That's right, we need only consider a substance that can successfully
>> substitute for the limited range of functions we are interested in,
>> whether it be cellular communication or cleaning windows.
>
> Which is why, since we have no idea what the ranges of functions or
> dependencies are contained in the human psyche, we cannot assume that
> watering the plants with any old clear liquid should suffice.
We need to know what the functions are before we can substitute for them.
>> But TV programs can be shown on a TV with an LCD or CRT screen. The
>> technologies are entirely different, even the end result looks
>> slightly different, but for the purposes of watching and enjoying TV
>> shows they are functionally identical.
>
> Ugh. Seriously? Are you going to say that, in a universe of only black
> and white versus color TVs, it's no big deal whether it's in color or
> not? It's like saying that the difference between a loaded pistol
> blowing your brains out and a toy water gun is that one is a bit
> noisier and messier than the other. I made my point; you are grasping
> at straws.
The function of a black and white TV is different from that of a
colour TV. However, the function of a CRT TV is similar to that of an
LCD TV (both colour) even though the technology is completely
different.
>> Differences such as the weight
>> or volume of the TV exist but are irrelevant when we are discussing
>> watching the picture on the screen, even though weight and volume
>> contribute to functional differences not related to picture quality.
>> Yes, no doubt it would be difficult to go substituting cellular
>> components, but as I have said many times that makes no difference to
>> the functionalist argument, which is that *if* a way could be found to
>> preserve function in a different substrate it would also preserve
>> consciousness.
>
> Of course, the functionalist argument agrees with itself. If there is
> a way to do the impossible, then it is possible.
It's not impossible, there is a qualitative difference between
difficult and impossible. It would be difficult for humans to build a
planet the size of Jupiter, but there is no theoretical reason why it
could not be done. On the other hand, it is impossible to build a
square triangle, since it presents a logical contradiction. There is
no logical contradiction in substituting the function of parts of the
human body. Substituting one thing for another to maintain function is
one of the main tasks to which human intelligence is applied.
>> That's right, since the visual cortex does not develop properly unless
>> it gets the appropriate stimulation. But there's no reason to believe
>> that stimulation via a retina would be better than stimulation from an
>> artificial sensor. The cortical neurons don't connect directly to the
>> rods and cones but via ganglion cells which in turn interface with
>> neurons in the thalamus and midbrain. Moreover, the cortical neurons
>> don't directly know anything about the light hitting the retina: the
>> brain deduces the existence of an object forming an image because
>> there is a mapping from the retina to the visual cortex, but it would
>> deduce the same thing if the cortex were stimulated directly in the
>> same way.
>
> No, it looks like it doesn't work that way:
> http://www.mendeley.com/research/tms-of-the-occipital-cortex-induces-tactile-sensations-in-the-fingers-of-blind-braille-readers/
That is consistent with what I said.
>> It is irrelevant to the discussion whether the feeling of free will is
>> observable from the outside. I don't understand why you say that such
>> a feeling would have "no possible reason to exist or method of arising
>> in a deterministic world". People are deluded about all sorts of
>> things: what reason for existing or method of arising do those
>> delusions have that a non-deterministic free will delusion would lack?
>
> Because free will in a deterministic universe would not even be
> conceivable in the first place to have a delusion about it. Even
> delusional minds can't imagine a square circle or a new primary color.
You're saying that free will in a deterministic world is
contradictory. That may be the case if you define free will in a
particular way (and not everyone defines it that way), but still that
does not imply that the *feeling* of free will is incompatible with
determinism.
>> This is your guess, but if everything has qualia then perhaps a
>> computer running a program could have similar, if not exactly the
>> same, qualia to those of a human.
>
> Sure, and perhaps a trash can that says THANK YOU on it is sincerely
> expressing its gratitude. With enough ecstasy, it very well might
> seem like it does. Why would that indicate anything about the native
> qualia of the trash can?
That's not an argument. There is no logical or empirical reason to
assume that the qualia of a computer that behaves like you cannot be
very similar to your own. Even if you believe qualia are
substrate-dependent, completely different materials can have the same
physical properties, so why not the same qualia?
--
Stathis Papaioannou
> On Sep 25, 10:33 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 25 Sep 2011, at 02:51, Craig Weinberg wrote:
>>
>>> (next installment)
>>
>>> On Sep 23, 3:17 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>
>>>> On 23 Sep 2011, at 02:42, Craig Weinberg wrote:
>>
>>>>> It is a comparison made by a third
>>>>> person observer of a human presentation against their expectations
>>>>> of
>>>>> said human presentation. Substitution 'level' similarly implies
>>>>> that
>>>>> there is an objective standard for expectations of humanity. I
>>>>> don't
>>>>> think that there is such a thing.
>>
>>>> It all depend what you mean by "objective standard".
>>
>>> That there is some kind of actual set of criteria which make the
>>> difference between human and non human.
>>
>> You really frighten me. The last time I read something similar was
>> when I read Mein Kampf by A. Hitler, notably on the handicapped
>> people, homosexuals, Jews, etc.
>
> Oof. We've hit the Godwin Law Limit. http://en.wikipedia.org/wiki/Godwin%27s_law
>
> But I'm the one saying that there isn't a substitution level.
> You've
> been the one telling me that there must be.
So you are the one saying that a human with a prosthetic brain is no
longer human.
> I'm saying that we are
> both mechanical and non-mechanical,
That is a contradiction. Would you say "yes and no" to the doctor?
> but you are saying that all non-
> mechanical must be reducible to the consequences of mechanism. That
> sounds much more like epistemological fascism to me.
The 3-I is mechanical (comp assumption), and the 1-I is not
(consequence).
Why postulate something when we can derive it from simpler assumptions?
It is not fascism, it is Occam's razor.
I agree. This follows from comp. But the artificial brain can do as
much as the biological brain in making it possible for a genuine
consciousness to manifest itself locally. The 1-p related to that
brain will have its "futures" determined by the statistics on all
computations though.
> The brain is the common sense of what we are and the psyche is the
> uncommon sense of what we are. They overlap through their sharing of
> sense, and through their symmetrical divergence from each other, and
> they underlap through their separate developmental encounters through
> chance.
Perhaps. But not relevant to negate comp.
>
> If you build a brain based upon only the common sense without
> factoring in the symmetrical divergence and the underlapping non-
> sense, then I think that you get a device with all of the capability
> to feel and think as a complicated alarm clock.
You beg the question. You just say: my personal opinion is that a
brain is not a machine.
Not at all. Comp justifies it in the sense that it is a logical
consequence. No assumptions are needed above CT+YD. (Church thesis +
"yes doctor").
> that in fact feeling is no different from computing.
This shows you have not studied the paper. Computation is given by
Bp (with p sigma_1); feelings are given by Bp & Dt & p, with p
sigma_1. These obey quite different logics.
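For reference, a sketch of the modal variants being invoked here, as I understand them from the sane04 framework (the labels are my reconstruction and should be checked against the paper):

```latex
% B = provability box, D = its dual (Dp \equiv \neg B \neg p),
% t = any true sentence, p restricted to sigma_1 sentences.
\begin{align*}
\text{provable belief}       &:\; Bp \\
\text{knowledge (S4Grz)}     &:\; Bp \land p \\
\text{observation}           &:\; Bp \land Dt \\
\text{feeling / sensation}   &:\; Bp \land Dt \land p
\end{align*}
```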
> It's a fair theoretical-philosophical proposition, but it insists upon
> a flat computational read of reality to begin with, thus disqualifying
> any possibility for 1-p authority.
"1p-authority" is preserved and explained by all modalities with "&
p" (S4Grz, x, x*, x1, x1*).
>>
>>
>>
>>>>> If you don't assume
>>>>> that substance can be separated from function completely, then
>>>>> there
>>>>> is no meaning to the idea of zombies. It's like dehydrated water.
>>
>>>> I am rather skeptical on substance. But I tend to believe in waves
>>>> and
>>>> particles, because group theory can explain them. But I don't need
>> substance for that. And with comp, there is no substance that we can
>> relate, even indirectly, to consciousness. I see the notion of
>>>> substance as the Achilles' heel of the Aristotelian theories.
>>
>>> But if you are saying that zombies cannot exist,
>>
>> I am not saying that. I am saying that non-comp + materialism entails
>> bizarre infinities and/or zombies.
>>
>>> doesn't that mean
>>> that positing a substance that automatically is associated with a
>>> particular set of functions. Otherwise you could just program
>>> something to behave like a zombie.
>>
>> ?
>> To program something acting like a zombie is the same as programming
>> something to behave like a human.
>
> So to be human really is to be a zombie, but to hallucinate that you
> are not?
This does not follow. What I said is a direct consequence of the
definition of zombie. It does not follow that humans are zombies. It
suggests that a program behaving like a human is NOT a zombie. But
with non-comp, zombies can make sense.
OK. But this is not relevant for the negation of comp. The same
problem would occur with Aliens.
If some aliens invade earth to feed on humans, by survival necessity,
I guess they have that right. Of course we have the right to defend
ourselves, too.
>
>
>>
>>> It seems to violate the principle of universal
>>> emulation so that you could not, for instance have one digital
>>> person
>>> which was the virtualized slave of another, because the second
>>> digital
>>> body would be, in effect, a zombie. This seems to inject a special
>>> case of arbitrary Turing limitation. Consider the example of remote
>>> desktop software, where we can shell out one computer to another.
>>> What
>>> happens to the host computer's 'consciousness'? Does it not
>>> become a
>>> partial zombie, unable to locally control its behavior?
>>
>> In *that* sense, all bodies are zombies. A body is always a construct
>> of minds. This is not obvious, and is related to the fact that comp
>> makes physicalness emerging from consciousness, which emerges from
>> the
>> infinities of number relations.
>>
> If all bodies are zombies, then non-comp + materialism would seem to
> be a foregone conclusion. It seems like you are making zombies and
> infinities bad when you want to scare us but good when they are the
> inevitable result of arithmetic.
With comp, zombies do not make sense. And the infinities are the usual
ones of computer science, plus the one coming from the first person
indeterminacy.
We don't know that. We can bet on it, like when saying "yes" to a
doctor.
Of course. But that form of "realness" might be (and is, assuming
comp) epistemological. Not substantial.
That would not be a proof, but it is OK to be very cautious.
There is no proof that comp is correct. We can only assume it.
This begs the question. Define material. Is it "primitively material"
or not?
Current evidence is that nature has already bet on comp. So comp might
be an essential component of life. By eating and defecating, we do
replace our material constitution all the time. We are already
immaterial patterns.
> If it wasn't
> essential, then there is a chance that it could never be discovered,
> in which case you could have infinite pockets of eternal ignorance
> which have no contact with the arithmetic truth they are made of. With
> sensorimotive truth, that isn't possible. The 1-p native experience is
> a standalone truth which nonetheless is 3-p truth permeable. Comp
> makes absolute, permanent solipsism a possibility. It is ultimately a
> prison theology.
I can give sense to this. But this shows only that comp is a bit
frightening, not that comp is false.
>
>>
>>
>>> 4) All of the aesthetic hints bound up in our fictions of the unlive
>>> and the undead, as well as the stereotypes of cold, empty mechanism.
>>> Consistent themes in science fiction and fantasy. Again suggesting a
>>> mind-body pseudo-duality rather than an arithmetic monism.
>>
>> But arithmetic does explain phenomenological dualities (indeed
>> octalities).
>
> Does it describe itself as being outside of them?
Not really. Arithmetical truth is the trivial ontic modality Vp <->
p. But it is not nameable by the machine. It plays the role of "God"
in many senses. It is not outside arithmetic, but it is "outside" all
arithmetical machines.
>
>>
>>
>>
>>> 5) The clues in human development, with childhood seeing innate
>>> grounding in tangible sensorimotive participation rather than
>>> universal, self-driven sui generis mathematical facility. It takes
>>> years for us to develop to the point where we can be taught
>>> counting,
>>> addition, and multiplication.
>>
>> And? Don't confuse the human conception of numbers with the numbers
>> as the intended subject matter of human studies.
>
> So it suggests that the ground of being which is relevant to us is not
> complicated abstract logic but simple concrete experience.
The ground is elementary number relations, not experience. Experience
and matter are emergent concepts there. This is why comp makes
arithmetic the sufficient monist reality. Consciousness and matter are
internal perspectives.
>>
>>
>>
>>> 6) The lack of real world arithmetic phenomena independent of
>>> matter.
>>
>> That is like Roger Granet's argument that if 17 exists, it has to
>> exist somewhere. But this begs the question. It posits at the start
>> that existence is physical existence.
>
> You can say that existence isn't physical, but why would 17 be any
> more likely to exist without physical existence?
You can't, because physical appearance is a logical consequence of "17
and the like".
But we don't need primitive physical existence. We don't need, nor
use, an *assumption* that a basic physical reality primitively exists.
A bit like evolution makes useless the assumption that humans exist in
some privileged ways.
> You posit at the
> start that countingness exists independently of anything to actually
> count or anyone to count it. I understand the appeal, and I sort of
> started my TOE from there with 'patterns' being alive - but if that's
> all there was to it, there would be no point in having anything that
> seems physical at all.
UDA illustrates that you are mistaken there. The physical worlds
really exist ... in the head of Löbian machines, and this in a way
that makes mechanism testable.
The point is that we can explain the appearance of perception and
substance from arithmetic, but we cannot do the reciprocal. In fact
arithmetic cannot be explained with less than arithmetic. This is
non-trivial to prove, but is well known by mathematical logicians.
>
>>
>>
>>>>>>> I have confidence in the relation between
>>>>>>> comp and non-comp. That is the invariance, the reality, and a
>>>>>>> theory
>>>>>>> of Common Sense.
>>
>>>>>> comp gives a crucial role to no-comp.
>>
>>>>> Meaning that it is a good host to us as guests in its universe. I
>>>>> don't think that's the case. This universe is home to us all and
>>>>> we
>>>>> are all guests as well (guests of our Common Sense)
>>
>>>> ?
>>
>>> It makes us strangers in an arithmetic universe.
>>
>> Not necessarily.
>
> Why not?
"Stranger" is a subjective local notion. With "time", we can get
familiarity. We can make friends with arithmetical creatures.
It has to appear. What is the point of making babies? What is the
point of Saturn's rings?
What is the point of your point?
The physical reality is a consequence of 1+1=2 (to be short).
Independently if we like it or not.
>
>>
>>
>>
>>>>>>> It needs
>>>>>>> fluids - water, cells.
>>
>>>>>> Clothes.
>>
>>>>> Would you say that H2O is merely the clothes of water, and that
>>>>> water
>>>>> could therefore exist in a 'dehydrated' form?
>>
>>>> Sure. I do this in dreams. Virtual water gives virtual feeling of
>>>> wetness with great accuracy.
>>
>>> Virtual water doesn't do all of the things that real water does
>>> though. It's just a dynamic image and maybe some tactile sense. It
>>> doesn't have to boil or evaporate, doesn't quench thirst, etc.
>>
>> I am used to making coffee and tea in dreams.
>
> But you don't think that coffee can be accurately weighed or
> chemically analyzed, right? You don't think a dream thermometer is
> going to accurately measure its temperature?
It depends on which dream. In the dream we are plausibly sharing
right now, I guess we can make such more precise analyses.
>
>>
>>> I agree
>>> that some of our sense of water is reproduced locally in the psyche,
>>> but it is clearly a facade of H2O.
>>
>> But that is enough. Actually we do live with that facade, even when
>> we
>> drink real water, once you accept that the brain is a
>> representational
>> machine. The "H2O" is not the subject of the substitution. The
>> human is.
>
> The facade of water won't let you survive in the desert. The brain has
> mechanisms but they are not doing any representing, rather they are
> the physical embodiment of a sensorimotive presentation.
You have escaped all attempts to define what you mean by that.
> The
> presentation is a local isomorph of other presentations, but there is
> no transduction of electric signals into color images for instance.
> The signals are the physical shadow of the experience of the images.
>
>>
>>
>>
>>>>>>> Something that lives and dies and makes a mess.
>>
>>>>>> Universal machine are quite alive, and indeed put the big mess in
>>>>>> platonia.
>>
>>>>> What qualities of UMs make them alive?
>>
>>>> The fact that they are creative, reproduce, transform themselves,
>>>> are
>>>> attracted by God, sometime repulsed by God also, and that they can
>>>> diagonalize against all normative theories made about them. And
>>>> many
>>>> more things.
>>
>>> It sounds worthwhile but I would need to see some demos and
>>> experiments dumbed down for laymen to have an opinion.
>>
>> I did it, from the implementation of the modal logic in my long
>> version thesis. But the "dumbing down" makes it non-convincing, unless you
>> understand the program, and for this you need to dig harder on
>> mathematical logic.
>
> Maybe it's clarity that makes it non convincing? It's funny that you
> are saying I need to be much clearer but everything that I ask about
> your theory just gets me referred back to the academic source
> documents.
Because the work has already been done. Comp needs computer science;
you have a big job ahead in understanding its consequences.
You are the one claiming that comp is false, so you have to do the
work. It happens that all your arguments against comp can already be
defeated by universal machines, which actually shows that you have a
rather good intuition of what is going on. Too bad you want it to be a
non-comp theory. But that is your problem; I am just trying to help.
Comp is a theological possibility.
UDA is an informal, yet rigorous, proof.
AUDA is not a proof at all, but an arithmetical rendering of the UDA
consequences in the language of machine, which makes the extraction of
physics technical and verifiable.
>
>>
>>> You just said all of
>>> this great stuff that UM can do which is just like us, and then your
>>> one example of this is you and me ourselves?
>>
>> I was addressing different points.
>
> Hmm.
>>
>>> What is the point of
>>> saying that we are like ourselves and what would that have to do
>>> with
>>> supporting mechanism?
>>
>> I was illustrating that a theory like mechanism does not have to
>> eliminate the person, which is what happens when materialists defend
>> mechanism.
>
> I think it retains the person in name only. The content is absent.
The question is why?
>>
>>
>>
>>>>>>> How does the brain understand these things if it has no access
>>>>>>> to
>>>>>>> the
>>>>>>> papers?
>>
>>>>>> Comp explains exactly how things like papers emerge from the
>>>>>> computation. The explanation is already close to Feynman
>>>>>> formulation
>>>>>> of QM.
>>
>>>>> Unfortunately this sounds to me like "Read the bible and your
>>>>> questions will be answered."
>>
>>>> Read sane04.http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract
>>>> ...
>>
>>> I have. I like it but I can only get so far and I like my own ideas
>>> better.
>>
>> Then I encourage you to make them *much* clearer.
>>
> I'm certainly open to clarifying the language, but the ideas refer to
> phenomena which are actually unclear. The universe is not composed
> only of literal facts - it is a continuum of fictions which are sharp
> and clear on one end, fuzzy and entangled on the other.
The more a subject matter is difficult and unclear, the more you have
to be clear on it.
Not bad. That cannot be a primitive substance, then. Looks like the
comp notion of substance (like the formal BD# = B~B#), and also like
the substance extracted in UDA, in informal terms. Very good! But not
a problem for comp, on the contrary (if substance is awareness of
something, matter is already immaterial, unless you pretend that
awareness is material itself, which I'm afraid is what you are
thinking).
>>
>>
>>
>>>>>> You just criticize a theory that you admit knowing nothing about.
>>>>>> This
>>>>>> is a bit weird.
>>
>>>>> My purpose is not to criticize the empire of comp, it is to
>>>>> point to
>>>>> the virgin continent of sense.
>>
>>>> So you should love comp, because it points on the vast domain of
>>>> machine's sense and realities.
>>
>>> I do love it in theory. It's a whole new frontier to explore. It's
>>> just not the one I'm interested or qualified to explore.
>>
>> But then stay neutral on it.
>>
> I can't because it's presenting itself as an obstacle to a deeper
> understanding of awareness.
You are making it an obstacle. I think the obstacle is not mechanism,
but the reductionist (pre-Gödelian) conception of mechanism.
Why?
>>
>>
>>>>> The 3-p
>>>>> view of schematizing the belief of a thing is a second order
>>>>> conception to the 1-p primitive of what it is to feel that one
>>>>> *is*.
>>
>>>> Well, not in the classical theory of beliefs and knowledge.
>>
>>>>> It's an experience with particular features - a sensory input
>>>>> and a
>>>>> motive participation. Without that foundation, there is nothing to
>>>>> model.
>>
>>>> That's unclear. The "p" in Bp & p might play that role, as I
>>>> thought
>>>> you grasped above.
>>
>>> You can't start with doubting the self, because logically that would
>>> invalidate the doubt and fall into infinite regress. It's not even
>>> possible to consider because the Ur-arithmetic would have nothing to
>>> experience it.
>>
>> The 1-self is not doubtable, OK. The 3-self is.
>
> Right, that's what I'm saying. Why not start with the undoubtable?
This is what I do for the explanation of consciousness and matter,
but the undoubtable itself can be justified in arithmetic.
I am not saying that machines are correct. I limit myself to the study
of (arithmetically) correct machines, because it is all what we need
to explain matter and consciousness, but then the theory explains why
most machines cannot stay correct for long.
> It seems
> so obviously misguided to me. It's like something out of Being There.
> Don't you see that these machines aren't wise philosophers who all
> happen to agree, they are just Chauncey the gardener tapping into an
> oracle of pure emptiness...not the essence of psyche, but the essence
> of existential entropy.
They agree only on arithmetic. Even arithmetically correct machines
develop disagreement on many matters.
Let us hope it remains so.
Comp leads to theotechnologies. Biotechnology too.
That's too late. Nature, very plausibly, has already made that step.
>>
>>>>> I say that
>>>>> substitution level does not apply. I think that to prove
>>>>> substitution
>>>>> level exists
>>
>>>> Comp implies that no one can prove it exists. No machine can know
>>>> for
>>>> sure its substitution level, even after a successful teleportation.
>>>> She might believe that she has 100% survived but suffer from
>>>> anosognosia.
>>
>>> I can understand what you are saying, and I agree that it is a good
>>> way of modeling why a self-referencing entity would not be able to
>>> get
>>> behind itself, but it seems like a contradiction. If you say that we
>>> are machines, then you are saying that we cannot know for sure our
>>> substitution level, which is exactly what you are criticizing me on.
>>
>> You confuse indeterminate level, and infinitely low level.
>
> If you can't determine it, how do you know it's not infinitely low?
I don't know.
Nobody can ever know that, if it is true. If it is false, then we
might know that it is false.
>
>>
>>> If a machine cannot know for sure their subst level, does it know
>>> for
>>> sure that the level is not infinite?
>>
>> No. Comp justifies this precisely. Comp is a theological principle.
>> Comp ethics is that you have the right to say "no" to the doctor. We
>> cannot force comp to someone else.
>
> That sounds like a policy or best practices oath taken by comp
> practitioners rather than an actual consequence of comp.
No. It is a consequence of comp.
>
>>
>>> If not, then comp itself is not
>>> Turing emulable?
>>
>> Comp is not a person. But I get your idea. Comp, like numbers needs
>> to
>> be postulated, and cannot be derived from any theory not based on
>> some
>> act of faith. It *is* a theology. It is a scientific theology, and
>> this means only that it is doubtable. If some one say that comp is
>> the
>> truth, then, just according to comp: he is lying. It is a
>> possibility,
>> like any scientific theory. Practical comp, if that appears, can only
>> be a private concern, between you, your doctor, and perhaps your
>> favorite shaman.
>>
> I don't understand how you can presume that a particular morality
> automatically comes with comp. It's like saying that electricity can
> only be used to benefit mankind or something. I think you're avoiding
> the question too. I was just pointing out that if substitution level
> is indeterminate, then why would that indeterminacy be Turing
> emulable?
?
Comp is the assumption that there is a level of substitution where we
are Turing emulable.
So by definition we are turing emulable at that level.
This is elementary computer science. I have explained this already,
but it is ordinary math. Search with the term "diagonalization" in the
archive. I have also explained all this in the entheogen forum
recently. Read the thread "Simulated reality":
http://www.entheogen.com/forum/showthread.php?t=27553
(you can also read the UDA step 0, 1, 2, ... threads).
Or ask again if you are really interested, and I will do it when I
have more time. I will be slowed down soon by my job, so I will soon
shorten my comments and delay my answers. Sorry for that.
I guess no, but the word "matter" is unclear, even in physicalist
philosophy. Most physicists consider the photon to be as material as
the electron, and in QED the electron is not really separable from
the photon. But this digresses from your explanation that comp is
false.
Bruno
>> OK, so you agree that the *observable* behaviour of neurons can be
>> adequately explained in terms of a chain of physical events. The
>> neurons won't do anything that is apparently magical, right?
>
> Are not all of our observations observable behaviors of neurons?
> You're not understanding how I think observation works. There is no
> such thing as an observable behavior, it's always a matter of
> observable how, and by who? If you limit your observation of how
> neurons behave to what can be detected by a series of metal probes or
> microscopic antenna, then you are getting a radically limited view of
> what neurons are and what they do. You are asking a blind man what the
> Mona Lisa looks like by having him touch the paint, then making a
> careful impression of his fingers, and then announcing that the Mona
> Lisa can only do what fingerpainting can do, and that inferring
> anything beyond the nature of plain old paint to the Mona Lisa is
> magical. No. It doesn't work that way. A universe where nothing more
> than paint exists has no capacity to describe an intentional, higher
> level representation through a medium of paint. The dynamics of paint
> alone do not describe their important but largely irrelevant role to
> creating the image.
Observable behaviours of neurons include things such as ion gates
opening, neurotransmitter release at the synapse and action potential
propagation down the axon. I know there may also be non-observables,
but I'm only asking about the observables. Do you agree that if a
non-observable causes a change in an observable, that would be like
magic from the point of view of a scientist?
>> > We know that for example, gambling affects the physical behavior of
>> > the amygdala. What physical force do you posit that emanates from
>> > 'gambling' that penetrates the skull and blood brain barrier to
>> > mobilize those neurons?
>>
>> The skull has various holes in it (the foramen magnum, the orbits,
>> foramina for the cranial nerves) through which sense data from the
>> environment enters and, via a series of neural relays, reaches the
>> amygdala and other parts of the brain.
>
> What is 'sense data' made of and how does it get into 'gambling'?
Sense data could be the sight and sound of a poker machine, which gets
into the brain, is processed in a complex way, and is understood to be
"gambling".
> Not at all. The amygdala's response to gambling cannot be observed on
> an MRI. We can only infer such a cause because we a priori understand
> the experience of gambling. If we did not, of course we could not
> infer any kind of association with neural patterns of firing with
> something like 'winning a big pot in video poker'. That brain activity
> is not a chain reaction from some other part of the brain. The brain
> is actually responding to the sense that the mind is making of the
> outside world and how it relates to the self. It is not going to be
> predictable from whatever the amygala happens to be doing five seconds
> or five hours before the win.
The amygdala's response is visible on an fMRI, which is how we know
about it. We could infer this without knowing anything about either
gambling or the brain, simply by noticing that input A (the poker
machine) is consistently followed by output B (the amygdala lighting
up on fMRI).
>> You have not answered it. You have contradicted yourself by saying we
>> *don't* observe the brain doing things contrary to physics and we *do*
>> observe the brain doing things contrary to physics.
>
> We don't observe the Mona Lisa doing things contrary to the properties
> of paint, but we do observe the Mona Lisa as a higher order experience
> manifested through paint. It's the same thing. Physics doesn't explain
> the psyche, but psyche uses the physical brain in the ordinary
> physical ways that the brain can be used.
But the Mona Lisa does not move of its own accord. That is what it
would have to do for the situation to be analogous to brain changes
occurring due to mental processes and not physical processes.
>>You seem to
>> believe that neurons in the amygdala will fire spontaneously when the
>> subject thinks about gambling, which would be magic.
>
> You don't understand that you are arguing against neuroscience and
> common sense. Of course you can manually control your electrochemical
> circuits with thought. That's what all thinking is. It's not that the
> amygdala fires spontaneously, it's that the thrills and chills of
> risktaking *are* the firing of the amygdala. You seem to be saying
> that the brain has our entire life planned out for us in advance as
> some kind of meaningless encephalographic housekeeping exercise where
> we have no ability to make ourselves horny by thinking about sex or
> hungry by thinking about food, no capacity to do or say things based
> upon the realities outside of our skull rather than the inside.
I'm not sure if you're not understanding or just pretending not to
understand. Take any neuron in the brain: it fires due to the
influences of the surrounding neurons, and each of those neurons fires
due to the influence of the neurons surrounding it, and so on,
accounting for all the neurons in the brain. These are the third
person observable effects; associated with (or identical to, or
another aspect of, or supervening on, or a side-effect of - it doesn't
change the argument) this observable activity are the thoughts and
feelings. A scientist cannot see the thoughts and feelings, since they
are non-observable. The non-observable thoughts and feelings cannot
affect the observable physical activity, for if they could, the
scientist would see apparently magical events. We can still say that
thought A leads to feeling B, but what the scientist observes is that
brain state A' (associated with thought A) leads to brain state B'
(associated with feeling B). So although we can tell the story of the
person in terms of thoughts and feelings, the scientist can tell the
same story in terms of biochemical events. If the scientist
understands the biochemistry then in theory he will be able to predict
everything the person will do (or write probabilistic equations if
truly random effects are significant in the brain), although in
practice due to the complexity of the system this would be very
difficult.
>>Neurons only fire
>> in response to a physical stimulus.
>
> Absurd. Is there a physical difference between a letter written in
> Chinese and one written in English...some sort of magic neurochemical
> that wafts off of the Chinese ink that prevents my cortex from parsing
> the characters?
Of course there is! The Chinese characters reflect light in a
different pattern, which stimulates the retina differently, which
sends different signals to the visual cortex, which sends different
signals to the language centres. If knowledge of Chinese has been
stored in the language centre the subject understands it, otherwise he
does not.
>> That the physical stimulus has
>> associated qualia is not observable:
>> a scientist would see the neuron
>> firing, explain why it fired in physical terms, and then wonder as an
>> afterthought if the neuron "felt" anything while it was firing.
>
> Which is why that approach is doomed to failure. There is no point to
> the brain other than to help process qualia. Very little of the brain
> is required for a body to survive. Insects have brains, and they
> survive quite well.
That the scientist can't see the qualia is not his fault. As a
practical matter, knowledge of the mechanics of the brain can help in
restoring normal function when things go wrong, even without
understanding the qualia.
>> >> A neuron has a limited number of duties: to fire if it sees a certain
>> >> potential difference across its cell membrane or a certain
>> >> concentration of neurotransmitter.
>>
>> > That is a gross reductionist mispresentation of neurology. You are
>> > giving the brain less functionality than mold. Tell me, how does this
>> > conversation turn into cell membrane potentials or neurotransmitters?
>>
>> Clearly, it does, since this conversation occurs when the neurons in
>> our brains are active.
>
> My God. You are unbelievable. I give you a straightforward, unarguably
> obvious example of a phenomenon which obviously has absolutely nothing
> to do with cellular biology but is nonetheless controlling the
> behavior of neurological cells, and you answer that it must be
> biological anyway. Your position, literally, is that 'I can't be
> wrong, because I already know that I am right.'
Particular brain activity is necessary and sufficient for this
conversation to occur. It is necessary because without this brain
activity, no conversation. It is sufficient because if this brain
activity occurs, the conversation occurs. These are mainstream
scientific beliefs which are not disputed, like the fact that the
heart pumps blood.
>>The important functionality of the neurons is
>> the action potential, since that triggers other neurons and ultimately
>> muscle. The complex cellular apparatus in the neuron is there to allow
>> this process to happen, as the complex cellular apparatus in the
>> thyroid is to enable secretion of thyroxine. An artificial thyroid
>> that measured TSH levels and secreted thyroxine accordingly could
>> replace the thyroid gland even though it was nothing like the original
>> organ in structure.
>
> But you have no idea what triggers the action potentials in the first
> place other than other action potentials. This makes us completely
> incapable of any kind of awareness of the outside world. You are
> mistaking the steering wheel for the driver.
The outside world gets in via the sense organs, which trigger action
potentials in nerves, which then trigger a series of action potentials
in the brain.
>> > So if I move my arm, that's because the neurons that have nothing to
>> > do with my arm must have caused the ones that do relate to my arm to
>> > fire? And 'I' think that I move 'my arm' because why exactly?
>>
>> The neurons are connected in a network. If I see something relating to
>> the economy that may lead me to move my arm to make an online bank
>> account transaction.
>
> What is 'I' and how does it physically create action potentials? The
> whole time you are telling me that only neurons can trigger other
> neurons, and now you want to invoke 'I'? Does I follow the laws of
> physics or is it magic? Which is it? Does 'I' do anything that cannot
> be explained by action potentials and cerebrospinal fluid? I expect
> I'm going to hear some metaphysical invocations of 'information' in
> the network.
"I" am the ensemble of neurons in the brain which when they are
functioning properly give rise to consciousness and a sense of
identity. "I" never do anything that can't be explained in terms of a
chain of neuronal events.
>> Obviously there has to be some causal connection
>> between my arm and the information about the economy. How do you
>> imagine that it happens?
>
> It happens because you make sense of what you read about the
> economy and that sense motivates you to instantiate your own arm
> muscles to move your arm. The experience making sense of the economic
> news, as you said, *may* lead 'you' to move your arm - not *will
> cause* your arm to move, or your neurons to secrete acetylcholine by
> itself. It's a voluntary, high level, top-down participation through
> which you control your body and your life.
The making sense of what you read occurs due to certain neuronal
activity in the language centre of your brain. This may or may not
cause you to take a certain action, just as a coin may come up heads
or tails.
>> > If the brain of even a flea were anywhere remotely close to the
>> > simplistic goofiness that you describe, we should have figured out
>> > human consciousness completely 200 years ago.
>>
Even the brain of a flea is very complex. The brain of the nematode
C. elegans is the simplest brain we know, and although we have the
>> anatomy of its neurons and their connections, no adequate computer
>> simulation exists because we do not know the strength of the
>> connections.
>
> Why is the strength of the connections so hard to figure out?
Because scientific research is difficult.
>> There is a certain level of tolerance in every physical object we
>> might want to simulate. We need to know a lot about it, but we don't
>> need accuracy down to the position of every atom, for if the brain
>> were so delicately balanced it would malfunction with the slightest
>> perturbation.
>
> A few micrograms of LSD or ricin can change a person's entire life or
> end it.
Yes, there are crucial parts of the system which don't tolerate
disruption. It's the same with any machine.
>> Whether something is conscious or not has nothing to do with whether
>> it is deterministic or predictable.
>
> What makes you think that's true? Do you have a counterfactual?
There is no reason to believe that determinism affects consciousness.
In general it is impossible to distinguish random from pseudorandom.
If the brain utilised true random processes and part of it were
replaced with a component that used a pseudorandom number generator
with a similar probability function to the true random one we would
notice no change in behaviour and the subject would notice no change
in consciousness (for if he did there would be a change in behaviour).
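The indistinguishability claim above can be sketched in code. The snippet below (an illustration, not part of the original argument) compares a stream of "true" random bits drawn from the operating system's entropy source with a stream from a deterministic, seeded PRNG: a consumer who sees only the flips has no way to tell, from the interface, which source is the deterministic one.

```python
import random

def coin_flips(rng, n):
    """Draw n fair coin flips (True/False) from any RNG exposing random()."""
    return [rng.random() < 0.5 for _ in range(n)]

true_random = random.SystemRandom()   # OS entropy source ("true" randomness)
pseudo_random = random.Random(12345)  # deterministic PRNG with a fixed seed

a = coin_flips(true_random, 100_000)
b = coin_flips(pseudo_random, 100_000)

# Both streams show ~50% heads. Nothing in the observable output
# distinguishes the deterministic source from the entropy source.
print(sum(a) / len(a), sum(b) / len(b))
```

Formally distinguishing the two would require either the seed or a statistical weakness in the PRNG, which is the sense in which the replacement component in the thought experiment would go unnoticed.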
>> This statement shows that you haven't understood what a partial zombie
>> is. It is a conscious being which lacks consciousness in a particular
>> modality, such as visual perception or language processing, but does
>> not notice that anything is abnormal and presents no external evidence
>> that anything is abnormal. You have said a few posts back that you
>> think this is absurd: when you're conscious, you know you're
>> conscious.
>
> I can only use examples where the partial zombie is on the outside
> rather than the inside, since there is no way to have an example like
> that (you either can't tell if someone else is a zombie or you can't
> tell anything if you yourself are a partial zombie). I understand
> exactly what you are saying, I'm just illustrating that if you turn it
> around so that we can see the zombie side out but assume a non-zombie
> side inside, it's the same thing, and that it's no big deal.
A partial zombie occurs if only part of your brain is zombified.
Because this part of the brain (by definition) has the same observable
third person behaviour as it did before it was zombified, you would
lack the qualia of the replaced part while not noticing or behaving
differently. It is this which is absurd. The only way out of the
absurdity is to say that it is impossible to make a brain component
with the same observable third person behaviour that didn't also have
the same qualia. (Sorry for the clumsiness of "observable third person
behaviour" - I should just say "behaviour" but I think in the past you
have taken this to include consciousness).
>> The question is, why did humans evolve with consciousness rather than
>> as philosophical zombies? The answer is, because it isn't possible to
>> make a philosophical zombie since anything that behaves like a human
>> must be conscious as a side-effect.
>
> I understand that you are able to take that argument seriously, but it
> just jaw dropping to me that anyone could. Why does fire exist?
> Because it isn't possible to burn anything without starting a fire
> because anything that behaves like it's on fire must be burning as a
> side effect. It's just the most nakedly fallacious non-explanation I
> can imagine. It has zero explanatory power, and besides that, it's
> completely untrue. An actor's presence in a movie behaves like a human
> but the image on the screen is not 'conscious as a side-effect'. They
> are not even a little bit more conscious than a picture of a circle.
> Just, ugh.
Consciousness is a rather elaborate thing to evolve and elaborate
things like that don't evolve unless they strongly enhance survival
and reproductive success. If philosophical zombies were possible, they
would have the same survival and reproductive success as non-zombies.
But philosophical zombies did not evolve, suggesting to me that
consciousness is a necessary side-effect of any intelligent being.
>> It's not impossible, there is a qualitative difference between
>> difficult and impossible. It would be difficult for humans to build a
>> planet the size of Jupiter, but there is no theoretical reason why it
>> could not be done. On the other hand, it is impossible to build a
>> square triangle, since it presents a logical contradiction. There is
>> no logical contradiction in substituting the function of parts of the
>> human body. Substituting one thing for another to maintain function is
>> one of the main tasks to which human intelligence is applied.
>
> I understand what you are saying, and I would agree with you if the
> contents of the psyche were not so utterly different from the physical
> characteristics of the brain. We have no precedent for engineering
> such a thing. It dwarfs the idea of building Jupiter. If you say we
> can substitute lead for gold, I would say, well, sure, if you blast it
> down to protons and reassemble it atom by atom - or find an easier way
> to do it with a particle accelerator. But we have no common
> denominator of human consciousness to work from. A few micrograms off
> here or chromosomes off there, and you get major changes. I'm much
> more optimistic about replicating tissue, and augmenting the nervous
> system, but actually replacing it and expecting 'you' to still be in
> there is a completely different proposition.
We have already started engineering brain replacement: cochlear
implants, artificial hippocampus. These are crude but it's early days
yet.
>> You're saying that free will in a deterministic world is
>> contradictory. That may be the case if you define free will in a
>> particular way (and not everyone defines it that way), but still that
>> does not imply that the *feeling* of free will is incompatible with
>> determinism.
>
> I think that it is, because determinism assumes that everything that
> happens happens for a particular reason. What would be the reason for
> such a feeling to exist, and how would it come into existence? Why
> would determinism care if something pretends that it is not
> determined, and how could it even ontologically conceive of the
> non-determined?
The feeling of free will is simply due to the fact that I don't know
what I'm going to do until I do it. This is the case for computer
programs as well: the program can't know what the outcome of the
computation is until it actually runs, otherwise running it would be a
waste of time.
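The point that a program cannot, in general, know the outcome of its computation without actually running it has a classic illustration (my example, not Stathis's): the Collatz map, for which no known shortcut predicts the step count, so the only general way to learn the answer is to perform the computation.

```python
def collatz_steps(n):
    """Count iterations of the Collatz map (n -> n/2 if even, 3n+1 if odd)
    until n reaches 1. No known closed form predicts this count, so the
    program learns its own result only by running."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # known only once the loop has actually run
```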
--
Stathis Papaioannou
>> Do you agree that if a
>> non-observable causes a change in an observable, that would be like
>> magic from the point of view of a scientist?
>
> Not at all. We observe 3-p changes caused by 1-p intentionality
> routinely. There is a study cited recently in that TV documentary
> where the regions of vegetative patients brains associated with
> coordinated movements light up an fMRI when being asked to imagine
> playing tennis. http://web.me.com/adrian.owen/site/Publications_files/Owen-2006-FutureNeurology.pdf
> p. 693-4
> Why do you want me to think that the ordinary relationship between the
> brain and the mind is magic? The 'non-observable cause' is the patient
> voluntarily imagining playing tennis. There is no other cause. They
> were given a choice between tennis and house, and the result of the
> fMRI was determined by nothing other than the patient's subjective
> choice. So will you stop accusing me of witchcraft about this now or
> is there going to be some other way of making me seem like I am the
> one rejecting science when it is your position which broadly
> reimagines the brain as some kind of closed-circuit Rube Goldberg
> apparatus?
The patient "voluntarily imagines playing tennis" if and only if
certain neural processes occur in the brain. If you believe thoughts
can arise in the absence of such neural processes or that thoughts by
themselves (i.e. not the associated neural process) can cause physical
changes in the brain such as neurons firing then you believe in
something like an immaterial soul which does our thinking for us. It's
not impossible that there is an immaterial soul but then the question
needs to be asked, why would we need a Rube Goldberg apparatus like a
brain at all when matter can be directly animated by spirit?
>> Sense data could be the sight and sound of a poker machine, which gets
>> into the brain, is processed in a complex way, and is understood to be
>> "gambling".
>
> By sight and sound do you mean acoustic waves and photons? Those
> things don't physically 'get into the brain', do they? You won't find
> 'sights and sounds' in the bloodstream. If you include them in a model
> of neurology, wouldn't you have to include the entire universe?
Light and sound are converted into electrical impulses that travel
down the optic and auditory nerves.
>> I'm not sure if you're not understanding or just pretending not to
>> understand. Take any neuron in the brain: it fires due to the
>> influences of the surrounding neurons,
>
> Noooo. Millions of neurons fire simultaneously in separate regions of
> the brain. Your assumptions about chain reactions being the only way
> that neurons fire is not correct. You owe the brain an apology.
> http://www.youtube.com/watch?v=VaQ66lDZ-08
>
> Please note: "Coherent SPONTANEOUS activity"
> http://jn.physiology.org/content/96/6/3517.full?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&author1=vincent&searchid=1&FIRSTINDEX=0&sortspec=relevance&resourcetype=HWCIT
>
>>and each of those neurons fires
>> due to the influence of the neurons surrounding it, and so on,
>> accounting for all the neurons in the brain.
>
> This is a fairy tale which I have not even heard anyone else claim
> before.
A neuron will fire or not fire due to its internal state and the
influence of its environment. Its internal state includes, for
example, the resting membrane potential, the intracellular
concentration of sodium, potassium and calcium ions, and the type and
number of receptor proteins in the membrane. The environment includes
which other neurons it interfaces with, the type and concentration of
neurotransmitters these neurons may be releasing, the temperature, pH
and ionic concentrations in the extracellular fluid, and so on. These
factors all go into determining whether the neuron will trigger or
not. The analysis applies to every neuron in the brain including the
spontaneously active ones. The same thing applies if there is one
neuron or a hundred billion neurons, although the large number of
neurons will result in much more complex behaviour.
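The picture sketched above, a neuron firing or not as a function of its internal state and its inputs, including intrinsically active cells, is essentially the textbook leaky integrate-and-fire model. Below is a minimal sketch; the parameter values are illustrative, not physiological.

```python
def step(potential, inputs, *, leak=0.9, threshold=1.0, rest=0.0, bias=0.0):
    """One update of a leaky integrate-and-fire neuron.

    potential: internal state (membrane potential)
    inputs:    weighted influences from surrounding neurons
    bias:      intrinsic drive; a positive bias yields 'spontaneous'
               firing even with no external input at all
    Returns (new_potential, fired).
    """
    potential = leak * potential + sum(inputs) + bias
    if potential >= threshold:
        return rest, True   # fire and reset to resting potential
    return potential, False

# A unit with intrinsic drive fires periodically with zero input,
# yet its firing is still fully determined by state plus parameters:
v, spikes = 0.0, []
for t in range(10):
    v, fired = step(v, [], bias=0.3)
    spikes.append(fired)
print(spikes)
```

The "spontaneously active" neurons mentioned in the text fit the same scheme: spontaneity here means an intrinsic drive term, not an uncaused event.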
>>These are the third
>> person observable effects; associated with (or identical to, or
>> another aspect of, or supervening on, or a side-effect of - it doesn't
>> change the argument) this observable activity are the thoughts and
>> feelings. A scientist cannot see the thoughts and feelings, since they
>> are non-observable.
>
> They are observable directly to the subject. A scientist can research
> the behavior of her own brain if she wants to.
>
>> The non-observable thoughts and feelings cannot
>> affect the observable physical activity,
>
> If they did not affect the observable physical world then I could not
> type to you my thoughts right now. Your position is utterly invalid if
> genuine, and not even entertaining if trollery.
>
>> for if they could, the
>> scientist would see apparently magical events.
>
> Like voluntary movement of body parts and speech?
You're just not getting it. It's not that movement in the body is not
due to thought, but we can't see the thought, we can only see the
underlying physical events. So to a scientist, every movement in the
body can be attributed to a chain of physical events. If a thought can
cause a movement in the absence of a physical event, for example if
ligand-dependent ion channels open and trigger an action potential in
the absence of the ligand, that would be observed as magical, like a
table levitating. You seem to think that not only neurons but every
cell has the capacity to do this sort of thing; so why has no
scientist ever reported it?
>>We can still say that
>> thought A leads to feeling B, but what the scientist observes is that
>> brain state A' (associated with thought A) leads to brain state B'
>> (associated with feeling B). So although we can tell the story of the
>> person in terms of thoughts and feelings, the scientist can tell the
>> same story in terms of biochemical events. If the scientist
>> understands the biochemistry then in theory he will be able to predict
>> everything the person will do (or write probabilistic equations if
>> truly random effects are significant in the brain), although in
>> practice due to the complexity of the system this would be very
>> difficult.
>
> Some brain states do work that way, but some don't. Anyone that moves
> their little finger is going to move it in a neurologically similar
> way, but what people want to do for a career is not determinable in
> the same way. It depends on where they are born, how they are raised,
> what their opportunities are, etc. It's not something which can be
> regressed from brain state Q to some kind of precursor brain state G.
Where they were born, how they were raised, what the weather is like:
all have a physical effect on the brain. If some factor has no impact
on the brain then it cannot possibly make a difference to the person.
This is not to say that the person's trajectory through life can be
predicted, but the weather cannot be predicted with certainty either.
>> "I" am the ensemble of neurons in the brain which when they are
>> functioning properly give rise to consciousness and a sense of
>> identity. "I" never do anything that can't be explained in terms of a
>> chain of neuronal events.
>
> What makes you think that 'giving rise to consciousness and a sense of
> identity' can be explained in terms of a chain of neuronal events.
> It's just because you assume a priori that is what consciousness is.
Without trying to "explain" consciousness I know the circumstances
under which consciousness can be produced.
>> The making sense of what you read occurs due to certain neuronal
>> activity in the language centre of your brain. This may or may not
>> cause you to take a certain action, just as a coin may come up heads
>> or tails.
>
> Why is the making sense necessary at all? Why wouldn't the neuronal
> activity of reading just cause the neuronal activity of taking a
> certain action?
It does - and in so doing, understanding occurs. There are varying
degrees of understanding, ranging from blindly following a protocol to
analysing what you read in depth. If you analyse what you read in
depth the neural processing is more complicated and the resulting
decision more difficult to predict. In each case, the understanding
supervenes on the neural activity. Disembodied understanding does not
come forth from an immaterial soul to move your hand.
>> > A few micrograms of LSD or ricin can change a person's entire life or
>> > end it.
>>
>> Yes, there are crucial parts of the system which don't tolerate
>> disruption. It's the same with any machine.
>
> Are you assuming then that consciousness is not such a disruption
> intolerant part of the system?
Consciousness is affected by small amounts of specific chemicals but
not affected by quite gross physical changes such as the loss of
millions of neurons in the course of a day.
>> >> Whether something is conscious or not has nothing to do with whether
>> >> it is deterministic or predictable.
>>
>> > What makes you think that's true? Do you have a counterfactual?
>>
>> There is no reason to believe that determinism affects consciousness.
>> In general it is impossible to distinguish random from pseudorandom.
>> If the brain utilised true random processes and part of it were
>> replaced with a component that used a pseudorandom number generator
>> with a similar probability function to the true random one we would
>> notice no change in behaviour and the subject would notice no change
>> in consciousness (for if he did there would be a change in behaviour).
>
> So the answer is no, you do not have a counterfactual, and that there
> is nothing that makes you think that it's true other than it cannot be
> proven to be false by non-subjective means. Considering that the whole
> question is about subjectivity, to rule out subjective views may not
> be a scientific way to approach it. To me, it's pretty clear that one
> of the functions of consciousness is to make determinations, and
> therefore presents another ontological option besides pre-determined,
> random, or pseudorandom. There is such a thing as intentionality; the
> fact that it cannot be understood through physics and computation is
> not a compelling argument at all to me, it just reveals the
> limitations of our current models of physics.
You didn't understand my argument. I sometimes don't understand yours.
>> A partial zombie occurs if only part of your brain is zombified.
>> Because this part of the brain (by definition) has the same observable
>> third person behaviour as it did before it was zombified, you would
>> lack the qualia of the replaced part while not noticing or behaving
>> differently. It is this which is absurd.
>
> That contradicts your view that the behavior of the mind must all be
> physically observable in the brain. We know that qualia don't
> physically exist in the brain, so that makes it a zombie already.
Behaviour is observable, qualia are not. I know I'm not a zombie but
you might be. I also know I'm not a partial zombie.
>>The only way out of the
>> absurdity is to say that it is impossible to make a brain component
>> with the same observable third person behaviour that didn't also have
>> the same qualia. (Sorry for the clumsiness of "observable third person
>> behaviour" - I should just say "behaviour" but I think in the past you
>> have taken this to include consciousness).
>
> No, the way out of it is to see that qualia can be absent, distorted,
> or replaced in the brain. Blind people learn Braille and use the same
> area of the brain that sighted people use for vision, only for tactile
> qualia. Synesthesia also shows that qualia are not fixed to
> functionality, and conversion disorders illustrate absent qualia
> without neurological deficit.
>
> Even if none of those things were true, to say that this unexplainable
> experiential dimension we live in must just 'come with' particular
> mathematical objects because we can't imagine being able to make
> something that acts like us but doesn't live in the same dimension has
> all the earmarks of a terrible theory.
Again I don't think you understand what would happen if you replaced
part of your brain with a qualia-less component that had the same
third person observable behaviour. Perhaps you could tell me in your
own words if you do.
>> If philosophical zombies were possible, they
>> would have the same survival and reproductive success as non-zombies.
>> But philosophical zombies did not evolve, suggesting to me that
>> consciousness is a necessary side-effect of any intelligent being.
>
> Philosophical zombies did evolve. They are called sociopaths.
Zombies lack all qualia, not just a conscience.
>> We have already started engineering brain replacement: cochlear
>> implants, artificial hippocampus. These are crude but it's early days
>> yet.
>
> It wouldn't matter if they were perfect. Using an artificial ear to
> hear with is not the same as becoming a computer program. People used to
> use a horn as a hearing aid. If I made a really fancy horn could I
> replace your brain with it?
If the ear can be replaced with impunity why not the auditory nerve,
and if the auditory nerve why not the auditory cortex?
>> The feeling of free will is simply due to the fact that I don't know
>> what I'm going to do until I do it.
>
> Why would there be a feeling associated with that? What purpose would
> it serve to know or not know that you don't know what you are going to
> do if you can't control whether or not you do it?
I do control what I do, but I don't know what decision I'm going to
make until I make it. If I did know what decision I was going to make
then I could change my mind - in which case, I would again be in a
position where I don't know what decision I'm going to make. Not
knowing what decision I'm going to make until I make it is consistent
with determinism.
--
Stathis Papaioannou
> On Sep 27, 9:20 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> Noooo. Millions of neurons fire simultaneously in separate regions of
> the brain. Your assumptions about chain reactions being the only way
> that neurons fire is not correct. You owe the brain an apology.
Digital machines can emulate parallelism.
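Bruno's one-line point is standard simulation practice: a serial machine emulates simultaneous firing by computing every unit's next state from a snapshot of the current states, then swapping buffers, so the order in which the loop visits units cannot affect the result. A minimal sketch with a toy update rule (my illustration, not a neural model):

```python
def parallel_step(states, influence):
    """Emulate one simultaneous update of all units on a serial machine.
    Every new state is computed from the *old* snapshot, so the
    sequential visiting order is invisible in the outcome."""
    return [influence(i, states) for i in range(len(states))]

# toy rule: a unit fires (1) iff its two ring-neighbours disagree
rule = lambda i, s: s[i - 1] ^ s[(i + 1) % len(s)]

states = [1, 0, 0, 1, 0]
states = parallel_step(states, rule)
print(states)
```

Because the new list is built entirely from the old one, millions of neurons "firing simultaneously in separate regions" poses no obstacle to digital emulation; it only affects how long one serial step takes.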
In all your answers to Stathis you elude the question by confusing
levels of explanation.
So either you postulate an infinitely low level (and thus infinities
in the brain), or you are introducing the magic mentioned by Stathis.
Bruno
In a sense I can follow you. If I feel in pain I can take a drug, and
in this case a high level psychological process can change a lower
level neuro process. But I am sure Stathis agrees with this. That whole
cycle can still be driven by still lower computable laws. A universal
machine can emulate another self-transforming universal machine.
That's the point.
Bruno
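Bruno's last sentence can be illustrated with a toy example (mine, not from the thread): a fixed host interpreter running a guest program that rewrites its own instructions. The "self-transformation" happens entirely inside an unchanging emulator, which is the point.

```python
# Hypothetical sketch: a host interpreter (the "universal machine") running a
# guest program that patches its own instruction list while it runs.
# The instruction set and the guest program are invented for illustration.
def run(program, max_steps=20):
    """Interpret a list of (op, arg) instructions; the program may modify itself."""
    program = list(program)          # the guest's own, mutable, code
    pc, acc = 0, 0
    while pc < len(program) and max_steps > 0:
        op, arg = program[pc]
        if op == "add":
            acc += arg
        elif op == "rewrite":        # self-transformation: patch another instruction
            target, new_instr = arg
            program[target] = new_instr
        elif op == "halt":
            break
        pc += 1
        max_steps -= 1
    return acc

# The guest patches instruction 2 from ("add", 1) to ("add", 100)
# before the program counter reaches it.
guest = [
    ("add", 5),
    ("rewrite", (2, ("add", 100))),
    ("add", 1),
    ("halt", None),
]
print(run(guest))
```

The host `run` never changes; only the guest's data (which happens to be its code) does, so a universal machine emulates the self-transforming one without itself being transformed.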
I don't feel this very compelling.
You have to assume some primitive matter, and notion of localization.
This is the kind of strong metaphysical and aristotleian assumption
which I am not sure to see the need for, beyond extrapolating from our
direct experience.
You have to assume mind, and a form of panpsychism, which seems to me
as much problematic than what it is supposed to explain or at least
describe.
The link between both remains as unexplainable as before.
You attribute to me a metaphysical assumption, where I assume only
what is taught in high school to everyone, + the idea that at some
level matter (not primitive matter, but the matter we can observe when
we look at our bodies) obeys deterministic laws, where you make three
metaphysical assumptions: matter, mind and a link which refer to
notion that you don't succeed to define (like sensorimotive).
Then you derive from this that the third person "I" is not Turing
emulable, but this appears to be non justified too, even if we are
willing to accept some meaning in those nanosensorimotive actions
(which I am not, for I don't have a clue about what they can be).
Bruno
> The neural processes and the thoughts are different views of the same
> thing. In the case of voluntarily imagining something, it is the
> subjective content of the experiences being imagined which makes sense
> and the neurological processes are the shadow. There is no strictly
> neurological reason for their behavior, let alone one that evokes
> 'tennis'. If it were something involuntary, like a fever coming on,
> then the neurological processes would be the active sensemaking agent
> and the experience of getting sick would be the shadow. It's bi-
> directional. I know that you won't admit that that could ever be the
> case, but I don't understand why.
There *is* a strictly neurological reason for the 3-P observable
behaviour. If we limit ourselves to talking about that, do you agree?
>> If a thought can
>> cause a movement in the absence of a physical event, for example if
>> ligand-dependent ion channels open and trigger an action potential in
>> the absence of the ligand, that would be observed as magical, like a
>> table levitating.
>
> The thought *is* a physical event, it's just the subjective view of
> it. It's many physical events, each with a subjective view, but
> together, rather than forming a machine of objects related in space,
> the experiential side is experiences over time which are shared as a
> single, deeper, richer experience stream over time.
But you can't see the thought. Restrict discussion for now to the 3-P
observable behaviour of a neuron being investigated by a cell
biologist. From the scientist's point of view, the neuron only fires
in response to stimuli such as neurotransmitters at the synapse
(depending on what sort of neuron it is). Do you see that if the
thought makes the neuron do anything other than what the scientist
expects it to do from consideration of its physical properties and the
physical properties of the environment then it would be observed to be
behaving magically?
> You are not answering my question. Why does there need to be
> 'understanding' at all? You are saying that neurology causes something
> to occur: understanding. What do you mean by that. What is it? Magic?
> Metaphysics?
It's something which cannot be reduced to something simpler.
>> Again I don't think you understand what would happen if you replaced
>> part of your brain with a qualia-less component that had the same
>> third person observable behaviour. Perhaps you could tell me in your
>> own words if you do.
>
> What would really happen is that it could not have the same third
> person observable behavior. If someone is deaf, you cannot observe
> their lack of hearing by observing them, unless you intentionally try
> to test them. If you replace someones eyes with eyes which only see in
> the x-ray spectrum, then the visual cortex would pick it up in the
> familiar colors of the visible spectrum. If you replaced the visual
> cortex with something that processes optical stimulation in the eyes
> invisibly, then the patient would see nothing but would develop
> perceptual compensation from their other senses very rapidly compared
> with someone who went blind suddenly. They would have to learn to read
> their new optical capacity and it would not be visual, but it would
> enable them eventually to behave as a sighted person in most relevant
> ways.
The replacement part reproduces the 3-P behaviour of the biological
part. This means the rest of the brain also has the same 3-P
behaviour, since it is subjected to the same 3-P environmental
influences from the replacement part (that is what was reproduced,
even if the qualia were not). So the subject behaves as if he has
normal vision and hearing and believes that he has normal vision and
hearing.
You may object that the rest of the subject's brain does not behave
normally since it lacks the input from the qualia. But if the qualia
affect neurons directly, over and above what you would expect from the
qualia-less physical activity, that would mean that magical events are
observed.
--
Stathis Papaioannou
Jason
On Sep 29, 2011, at 6:43 PM, Craig Weinberg <whats...@gmail.com>
wrote:
> On Sep 29, 10:29 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>> I don't feel this very compelling.
>> You have to assume some primitive matter, and notion of localization.
>
> Why? I think you only have to assume the appearance of matter and
> localization, which we do already.
That would make my point, except it is not clear, especially with what
you said before.
Appearance to whom, and to what kind of object?
You lose me completely.
>
>> This is the kind of strong metaphysical and aristotleian assumption
>> which I am not sure to see the need for, beyond extrapolating from
>> our
>> direct experience.
>
> Is it better to extrapolate only from indirect experience?
It is better to derive from clear assumptions.
>
>> You have to assume mind, and a form of panpsychism, which seems to me
>> as much problematic than what it is supposed to explain or at least
>> describe.
>
> It wouldn't be panpsychism exactly, any more than neurochemistry is
> panbrainism. The idea is that whatever sensorimotive experience taking
> place at these microcosmic levels
But now you have to define this, and explain where the microcosmos
illusion comes from, or your theory is circular.
> is nothing like what we, as a
> conscious collaboration of trillions of these things, can relate to.
> It's more like protopsychism.
... and where does that protopsychism come from, and what is it?
Could you clearly separate your assumptions, and your reasoning (if
there is any). I just try to understand.
>
>> The link between both remains as unexplainable as before.
>
> Mind would be a sensorimotive structure.
A physical structure? A mathematical structure? A theological structure?
> The link between the
> sensorimotive and electromagnetic is the invariance between the two.
?
>
>>
>> You attribute to me a metaphysical assumption, where I assume only
>> what is taught in high school to everyone, + the idea that at some
>> level matter (not primitive matter, but the matter we can observe
>> when
>> we look at our bodies) obeys deterministic laws, where you make three
>> metaphysical assumptions: matter, mind and a link which refer to
>> notion that you don't succeed to define (like sensorimotive).
>>
>> Then you derive from this that the third person "I" is not Turing
>> emulable, but this appears to be non justified too, even if we are
>> willing to accept some meaning in those nanosensorimotive actions
>> (which I am not, for I don't have a clue about what they can be).
>
> The "I" is always first person.
I don't think so. When I say that my child is hungry, I refer to a 1-I
in the third person way. That's empathy.
And there is also a 3-I, which is the body, or its local description
handled by the "doctor". They correspond in the theory to an abstract
notion of Gödel number. It is our "code" (at the right level) in the
comp frame.
In fact there are as many notions of I as there are intensional
variants of self-reference. They all have a role in the shaping of
reality.
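Bruno's remark that the 3-I corresponds to an abstract Gödel number can be illustrated with a toy numbering (my sketch; a real "code at the right level" would of course be vastly richer than a string):

```python
# Hypothetical sketch of Goedel-style numbering: any finite description can be
# packed into a single natural number and recovered from it without loss.
def godel_number(text):
    """Encode a string as one integer: its UTF-8 bytes as base-257 digits."""
    n = 0
    for byte in text.encode("utf-8"):
        n = n * 257 + byte + 1   # digits 1..256, so no digit is ever zero
    return n

def decode(n):
    """Invert godel_number by peeling off base-257 digits."""
    out = []
    while n > 0:
        out.append((n % 257) - 1)
        n //= 257
    return bytes(reversed(out)).decode("utf-8")

code = godel_number("if hungry: eat()")
print(code)                       # one large integer standing for the text
print(decode(code))               # the original description, recovered
```

The encoding is reversible, which is all the notion of a "code" needs: the description and the number carry exactly the same information.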
> The brain or body would be third
> person. What do you think of Super-Turing computation?
Which one?
Most are Turing emulated by the UD, and correspond to Turing's notion
of Oracle computable machine. It is an open problem if such form of TM
can exist physically, both in usual physics and in the comp physics.
Of course there might be notions of super-Turing machine being not
digitally emulable (even with oracle). You can use them to illustrate
your non-comp theory. That would make your theory far clearer indeed.
Bruno
On Sep 30, 2011, at 7:22 AM, Craig Weinberg <whats...@gmail.com>
wrote:
> On Sep 29, 11:14 pm, Jason Resch <jasonre...@gmail.com> wrote:
>> Craig, do the neurons violate the conservation of energy and
>> momentum? And if not, then how can they have any unexpected effects?
>>
>
> No. If you are wondering whether I think that anything that
> contradicts established observations of physics, chemistry, or biology
> is going on, the answer has always been no, and the fact that you are
> still asking means that you don't understand what I've said.
If it seems that I have misunderstood it is because I see a
contradiction. If a neuron opens its ion channels because of a
thought, then thought is something we can see all the correlates of in
terms of third person observable particle collisions. If the ion
channel were to open without the observable and necessary particle
collisions then the neuron would be violating the conservation of
momentum.
Jason
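Jason's bookkeeping argument can be made concrete with a worked example (mine; the masses and speeds are arbitrary stand-ins): in any collision the total momentum before equals the total momentum after, and an ion channel opening with no colliding cause would have to break exactly this ledger.

```python
# Hypothetical sketch: momentum bookkeeping in a 1-D elastic collision.
# Values are invented; think of a "neurotransmitter" striking a "gate".
def elastic_collision(m1, v1, m2, v2):
    """Return the outgoing velocities of two bodies after a 1-D elastic collision."""
    v1_out = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_out = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_out, v2_out

m1, v1, m2, v2 = 2.0, 3.0, 1.0, -1.0
w1, w2 = elastic_collision(m1, v1, m2, v2)

before = m1 * v1 + m2 * v2    # total momentum going in
after = m1 * w1 + m2 * w2     # total momentum coming out
print(before, after)          # the two totals agree
```

Whatever the masses and velocities, `before` and `after` agree to floating-point precision; an effect appearing without a corresponding cause would show up as a mismatch in this sum.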
>
>
> As long as you expect neurons to act like neurons - including to have
> self-directed thoughts and feelings at least in large groups, then
> they do not have any unexpected effects. If you watch a color TV
> program on a black and white TV, you aren't going to see any
> 'unexpected effects'. That's the point, what we expect from physics is
> not sufficient to explain all of the properties which we know first
> hand are present. You have to first look at it using a 'Color TV'.
>
> If you move your arm, does it violate the conservation of energy and
> momentum? No. Life is not a closed system. It exports entropy outside
> of itself so that it does what it needs to do and what it wants to do.
> You have to look at what is actually occurring in real life instead of
> starting from the black and white TV of physics and trying to shoehorn
> the technicolor, non-deterministic universe into it.
>
> Craig
>
I'm afraid the analogies you use don't help, at least for me. Does an
ion channel ever open in the absence of an observable cause? It's a
simple yes/no question. Whether consciousness is associated,
supervenient, linked, provided by God or whatever is a separate
question.
--
Stathis Papaioannou
I have no clue what you are talking about.
That your conclusion makes some arithmetical being look like an
impersonal zombie is just racism for me.
So I see a sort of racism against machine or numbers, justified by
unintelligible sentences.
>
>>>> This is the kind of strong metaphysical and aristotleian assumption
>>>> which I am not sure to see the need for, beyond extrapolating from
>>>> our
>>>> direct experience.
>>
>>> Is it better to extrapolate only from indirect experience?
>>
>> It is better to derive from clear assumptions.
>
> Clear assumptions can be the most misleading kind.
But that is the goal. Clear assumptions lead to clear misleading,
which can then be corrected with respect to facts, or repeatable
experiments.
Unclear assumptions lead to arbitrariness, racism, etc.
>
>>>> You have to assume mind, and a form of panpsychism, which seems
>>>> to me
>>>> as much problematic than what it is supposed to explain or at least
>>>> describe.
>>
>>> It wouldn't be panpsychism exactly, any more than neurochemistry is
>>> panbrainism. The idea is that whatever sensorimotive experience
>>> taking
>>> place at these microcosmic levels
>>
>> But now you have to define this, and explain where the microcosmos
>> illusion comes from, or your theory is circular.
>
> I don't think there is a microcosmos illusion, unless you are talking
> about the current assumptions of the Standard Model as particles.
> That's not an illusion though, just a specialized interpretation that
> doesn't scale up to the macrocosm. As far as where sensorimotive
> phenomena comes from, it precedes causality. 'Comes from' is a
> sensorimotive proposition and not the other way around. The
> singularity functions inherently as supremacy of orientation, and
> sense and motive are energetic functions of the difference between it
> and its existential annihilation through time and space.
That does not help.
>
>>
>>> is nothing like what we, as a
>>> conscious collaboration of trillions of these things, can relate to.
>>> It's more like protopsychism.
>>
>> ... and where does that protopsychism come from, and what is it?
>> Could you clearly separate your assumptions, and your reasoning (if
>> there is any). I just try to understand.
>
> Specifically, like if you have any two atoms, something must have a
> sense of what is supposed to happen when they get close to each other.
> Iron atoms have a particular way of relating that's different from
> carbon atoms, and that relation can be quantified. That doesn't mean
> that the relation is nothing but a quantitative skeleton. There is an
> actual experience going on - an attraction, a repulsion, momentum,
> acceleration...various states of holding, releasing, or binding a
> 'charge'. What looks like a charge to us under a microscope is in fact
> a proto-feeling with an associated range of proto-motivations.
Why?
>
>>>> The link between both remains as unexplainable as before.
>>
>>> Mind would be a sensorimotive structure.
>>
>> A physical structure? A mathematical structure? A theological
>> structure?
>
> No, a sensorimotive structure - which could encompass mathematical,
> theological, or physical styles. It's an experience that plays out
> over time and has participatory aspects. Some parts of the structure
> are quite literal and map to muscle movements and discrete neural
> pathways, and other ranges are lower frequency, broader, deeper, more
> continuous and poetic non-structure. It's a much wider band than that
> which is observable through physical instruments or computational
> devices, but physical and computational aspects of the cosmos have
> very precise and clear structures which exhaust our native ability to
> process with mind-numbing repetition and detail.
?
(I let you know that one of my main motivation consists in explaining
the physical, that is explaining it without using physical notions and
assumptions. The same for consciousness).
>
>>
>>> The link between the
>>> sensorimotive and electromagnetic is the invariance between the two.
>>
>> ?
> Feelings and action potentials have some phenomenological overlap.
What is feeling, what is action, what is potential?
> That's the link. They both map to the same changes at the same place
> and time, they just face opposite directions. Electromagnetism is
> public front end, sensorimotive is private back end, which for us can
> focus its attention toward the front, back, or the link in between.
?
>
>>>> You attribute to me a metaphysical assumption, where I assume only
>>>> what is taught in high school to everyone, + the idea that at some
>>>> level matter (not primitive matter, but the matter we can observe
>>>> when
>>>> we look at our bodies) obeys deterministic laws, where you make
>>>> three
>>>> metaphysical assumptions: matter, mind and a link which refer to
>>>> notion that you don't succeed to define (like sensorimotive).
>>
>>>> Then you derive from this that the third person "I" is not Turing
>>>> emulable, but this appears to be non justified too, even if we are
>>>> willing to accept some meaning in those nanosensorimotive actions
>>>> (which I am not, for I don't have a clue about what they can be).
>>
>>> The "I" is always first person.
>>
>> I don't think so. When I say that my child is hungry, I refer to a
>> 1-I
>> in the third person way. That's empathy.
>
> You still don't call your child 'I'. You're right that sensorimotive
> 1-
> p is sharable, as long as you are sufficiently isomorphic to the other
> entity.
That makes sense, at least by replacing "sensorimotive" by "subjective".
>
>> And there is also a 3-I, which is the body, or its local description
>> handled by the "doctor". They correspond in the theory to an abstract
>> notion of Gödel number. It is our "code" (at the right level) in the
>> comp frame.
>> In fact there are as many notions of I as there are intensional
>> variants of self-reference. They all have a role in the shaping of
>> reality.
>
> The subjective is a continuum from most subjective - imagination,
> interior monologue, etc to the ego, the body, clothes, possessions,
> language, home, memory, friends, work, interests, etc to the
> objective; partnerships, causes, philosophies, career, community,
> species, planet, etc. Yes, I agree they all have a role to play in the
> shaping of reality.
OK.
>
>>
>>> The brain or body would be third
>>> person. What do you think of Super-Turing computation?
>>
>> Which one?
>> Most are Turing emulated by the UD, and correspond to Turing's notion
>> of Oracle computable machine. It is an open problem if such form of
>> TM
>> can exist physically, both in usual physics and in the comp physics.
>> Of course there might be notions of super-Turing machine being not
>> digitally emulable (even with oracle). You can use them to illustrate
>> your non-comp theory. That would make your theory far clearer indeed.
>
> I was curious about Hava Siegelmann's theories about analog
> computation.
Those are material phenomena, and they can be used to perform some
computations, but with digital mechanism, they can be recovered in the
physical reality. They can't be primitive.
Bruno
On Sep 30, 10:16 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Sep 30, 2011, at 7:22 AM, Craig Weinberg <whatsons...@gmail.com>
> wrote:
>
> > On Sep 29, 11:14 pm, Jason Resch <jasonre...@gmail.com> wrote:
> >> Craig, do the neurons violate the conservation of energy and
> >> momentum? And if not, then how can they have any unexpected effects?
>
> > No. If you are wondering whether I think that anything that
> > contradicts established observations of physics, chemistry, or biology
> > is going on, the answer has always been no, and the fact that you are
> > still asking means that you don't understand what I've said.
>
> If it seems that I have misunderstood it is because I see a
> contradiction. If a neuron opens its ion channels because of a
> thought, then thought is something we can see all the correlates of in
> terms of third person observable particle collisions. If the ion
> channel were to open without the observable and necessary particle
> collisions then the neuron would be violating the conservation of
> momentum.

It's not the particle collisions that cause an ion channel to open;
it's the neuron's sensitivity to specific electrochemical conditions
associated with neurotransmitter molecules, and its ability to
respond with a specific physical change. All of those changes are
accompanied by qualitative experiences on that microcosmic level. Our
thoughts do not cause the ion channels to directly open or close any
more than a screen writer causes the pixels of your TV to get brighter
or dimmer, you are talking about two entirely different scales of
perception. Think of our thoughts and feelings as the 'back end' of
the total physical 'front end' activity of the brain.
The back end
thoughts and feelings cannot be reduced to the front end activities of
neurons or ion channels, but they can be reduced to the back end
experiences of those neurons or ion channels - almost, except that
they synergize in a more significant way than front end phenomena can.
Think of it like a fractal visualization if you want, where the large design is
always emerging from small designs, but imagine that the large design
and the small designs are both controlled by separate, but overlapping
intelligences so that sometimes the small forms change and propagate
to the larger picture and other times the largest picture changes and
all of the smaller images are consequently changed. Now imagine that
the entire fractal dynamic has an invisible, private backstage to it,
which has no fractal shapes developing and shifting every second, but
it has instead flavors and sounds that change at completely different
intervals of time than the front end fractal, so that the pulsating
rhythms of the fractal are represented on the back end as long
melodies and fragrant journeys.
Both the visual fractal and the olfactory musical follow some of the
same cues exactly and both of them diverge from each other completely
as well so that you cannot look at the fractal and find some graphic
mechanism that produces a song, and the existence of the song does not
mean that there is an invisible musicality pushing the pixels of the
fractal around, it's just that they are like the two ends of a bowtie;
one matter across space and the other experience through time. They
influence each other - sometimes intentionally, sometimes arbitrarily,
and sometimes in a conflicting or self defeating way.
Craig
>> I'm afraid the analogies you use don't help, at least for me. Does an
>> ion channel ever open in the absence of an observable cause? It's a
>> simple yes/no question. Whether consciousness is associated,
>> supervenient, linked, provided by God or whatever is a separate
>> question.
>
> Observable by who?
Observable by a third party.
> It seems like a simple yes or no question to you
> because you aren't willing or able to see the whole phenomena. If I
> choose to think about something that makes me mad, I observe that I
> feel angry, and I observe that neurons fire, ion channels open, etc at
> the same time. The thoughts and anger they arouse are the observable
> cause, but they cannot be observed with a microscope or fMRI. They are
> observed by the person whose brain it is. This is the literal reality
> of what is going on. If I put my hand on a hot stove, neurons fire,
> ion channels open, and I feel burning pain through my skin. The cause
> there is the heat of the stove.
--
Stathis Papaioannou
On Oct 1, 10:13 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 01 Oct 2011, at 03:39, Craig Weinberg wrote:
>> The singularity is all the matter that there is, was, and will be, but
>> it has no exterior - no cracks made of space or time, it's all
>> interiority. It's feelings, images, experiences, expectations, dreams,
>> etc, and whatever countless other forms might exist in the cosmos. You
>> can use arithmetic to render an impersonation of feeling, as you can
>> write a song that feels arithmetic - but not all songs feel
>> arithmetic. You can write a poem about a color or you can write an
>> equation about visible electromagnetism, but neither completely
>> describes either color or electromagnetism.
>
> I have no clue what you are talking about.
> That your conclusion makes some arithmetical being look like an
> impersonal zombie is just racism for me.
I don't think that there are any arithmetical beings.
It's a fantasy,
or really more of a presumption mistaking a narrow category of
understanding with a cosmic primitive.
> So I see a sort of racism against machine or numbers, justified by
> unintelligible sentences.
I know that's what you see. I think that it is the shadow of your own
overconfidence in the theoretical-mechanistic perspective that you
project onto me.
>>>>> This is the kind of strong metaphysical and aristotleian assumption
>>>>> which I am not sure to see the need for, beyond extrapolating from
>>>>> our direct experience.
>>>>
>>>> Is it better to extrapolate only from indirect experience?
>>>
>>> It is better to derive from clear assumptions.
>>
>> Clear assumptions can be the most misleading kind.
>
> But that is the goal. Clear assumptions lead to clear misleading,
> which can then be corrected with respect to facts, or repeatable
> experiments.
> Unclear assumptions lead to arbitrariness, racism, etc.
To me the goal is to reveal the truth,
regardless of the nature of the
assumptions which are required to get there. If you a priori prejudice
the cosmos against figurative, multivalent phenomenology then you just
confirm your own bias.
>> I don't think there is a microcosmos illusion, unless you are talking
>> about the current assumptions of the Standard Model as particles.
>> That's not an illusion though, just a specialized interpretation that
>> doesn't scale up to the macrocosm. As far as where sensorimotive
>> phenomena comes from, it precedes causality. 'Comes from' is a
>> sensorimotive proposition and not the other way around. The
>> singularity functions inherently as supremacy of orientation, and
>> sense and motive are energetic functions of the difference between it
>> and its existential annihilation through time and space.
>
> That does not help.
That doesn't help me either.
>> Specifically, like if you have any two atoms, something must have a
>> sense of what is supposed to happen when they get close to each other.
>> Iron atoms have a particular way of relating that's different from
>> carbon atoms, and that relation can be quantified. That doesn't mean
>> that the relation is nothing but a quantitative skeleton. There is an
>> actual experience going on - an attraction, a repulsion, momentum,
>> acceleration...various states of holding, releasing, or binding a
>> 'charge'. What looks like a charge to us under a microscope is in fact
>> a proto-feeling with an associated range of proto-motivations.
>
> Why?
Because that's what we are made of.
> ?
> (I let you know that one of my main motivation consists in explaining
> the physical, that is explaining it without using physical notions and
> assumptions. The same for consciousness).
But what you are explaining it with is no more explainable than
physical notions or assumptions. Why explain what is real in terms
which are not real?
>>>> The link between the
>>>> sensorimotive and electromagnetic is the invariance between the two.
>>>
>>> ?
>>
>> Feelings and action potentials have some phenomenological overlap.
>
> What is feeling, what is action, what is potential?
To ask what feeling is can only be sophistry.
It is a primitive of
human subjectivity, and possibly universal subjectivity. To experience
directly, qualitatively, significantly. An action potential is an
electromagnetic spike train among neurons. They can be correlated to
instantiation of feelings.
>> That's the link. They both map to the same changes at the same place
>> and time, they just face opposite directions. Electromagnetism is
>> public front end, sensorimotive is private back end, which for us can
>> focus its attention toward the front, back, or the link in between.
>
> ?
Electromagnetic and sensorimotive phenomena are opposite sides of the
same thing. I don't know how I could make it more clear.
Electromagnetism is public, generic, a-signifying, and sensorimotive
experience is private, proprietary and signifying.
>> I was curious about Hava Siegelmann's theories about analog
>> computation.
>
> Those are material phenomena, and they can be used to perform some
> computations, but with digital mechanism, they can be recovered in the
> physical reality. They can't be primitive.
What if material is primitive?
So you do believe that ion channels will open without an observable
cause, since thoughts are not an observable cause. A neuroscientist
would see neurons firing apparently for no reason, violating physical
laws.
--
Stathis Papaioannou
If they are part of the same thing, then it is presumptuous to say one causes the other.
One might as well say the neurons firing caused the thought of gambling - and in fact that
is what Stathis is saying and for the very good reason that a little electrical
stimulation, that has no "thought" or "sensorimotive" correlate, can cause both neurons
firing AND their correlated thoughts. But thoughts cannot cause the electrical stimulator
to fire. So it is *not* bidirectional.
Brent
The device cited picks up electrical impulses from the scalp. The
electrical activity comes from the neurons firing in the brain. These
neurons may have associated thoughts when they fire but this is not
obvious to an external observer: all that is obvious is that a
particular neuron fires because of various measurable factors such as
its resting membrane potential and the neurotransmitter released by
other neurons with which it interfaces. So to an external observer,
every neural event has an observable cause, generally other neural
events. This means the externally observable behaviour of the brain is
computable, even though the external observer may not know that the
brain is conscious. On the other hand, if the external observer does
not know about neurotransmitters and receptors he will not be able to
explain why the neurons fire - it will look to him as if they fire for
no reason. The mental is supervenient on the physical, but the mental
cannot as a separate entity move the physical. If it could, we would
observe neurons breaking physical laws.
--
Stathis Papaioannou
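Stathis's picture of a neuron whose every firing has an observable cause can be sketched with a toy leaky integrate-and-fire model (my illustration; the threshold and leak values are arbitrary). Each spike is fully determined by the measurable inputs, and with no input the model never fires.

```python
# Hypothetical sketch: a deterministic leaky integrate-and-fire neuron.
# Every spike has an observable cause: input current driving the membrane
# potential over threshold. Parameters are invented for illustration.
def simulate(inputs, threshold=1.0, leak=0.9):
    """Update the membrane potential step by step; return the spike times."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v = leak * v + i_in          # decay toward rest, then integrate input
        if v >= threshold:
            spikes.append(t)         # firing: fully explained by the inputs
            v = 0.0                  # reset after the spike
    return spikes

print(simulate([0.0] * 10))                  # no input, no spikes
print(simulate([0.6, 0.6, 0.0, 0.9, 0.9]))   # spikes only when input accumulates
```

If the model fired on the zero-input run, that would be the "neuron firing for no reason" Stathis describes; in a deterministic model it cannot.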
On Oct 2, 5:01 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 01 Oct 2011, at 21:05, Craig Weinberg wrote:
>> On Oct 1, 10:13 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>> On 01 Oct 2011, at 03:39, Craig Weinberg wrote:
>>>> The singularity is all the matter that there is, was, and will be,
>>>> but it has no exterior - no cracks made of space or time, it's all
>>>> interiority. It's feelings, images, experiences, expectations,
>>>> dreams, etc, and whatever countless other forms might exist in the
>>>> cosmos. You can use arithmetic to render an impersonation of
>>>> feeling, as you can write a song that feels arithmetic - but not
>>>> all songs feel arithmetic. You can write a poem about a color or
>>>> you can write an equation about visible electromagnetism, but
>>>> neither completely describes either color or electromagnetism.
>>>
>>> I have no clue what you are talking about.
>>> That your conclusion makes some arithmetical being look like an
>>> impersonal zombie is just racism for me.
>>
>> I don't think that there are any arithmetical beings.
>
> In which theory?
In reality.
>> It's a fantasy,
>> or really more of a presumption mistaking a narrow category of
>> understanding with a cosmic primitive.
>
> You miss the incompleteness discoveries. To believe that arithmetic is
> narrow just tells me something about you, not about arithmetic. It
> means that you have a pregodelian conception of arithmetic. We know
> today that arithmetic is beyond any conceivable effective
> axiomatization.
I don't disagree with arithmetic being exactly what you say it is,
only that it cannot be realized except through sensorimotive
experience. Without that actualization - to be computed neurologically
or digitally in semiconductors, analogously in beer bottles, etc, then
there is only the idea of the existence of arithmetic, which also is a
sensorimotive experience or nothing at all. There is no arithmetic
'out there', it's only inside of matter.
So yes, arithmetic extends to the inconceivable and nonaxiomatizable
but the sensorimotive gestalts underlying arithmetic are much more
inconceivable and nonaxiomatizable. A greater infinity.
>>> So I see a sort of racism against machine or numbers, justified by
>>> unintelligible sentences.
>>
>> I know that's what you see. I think that it is the shadow of your own
>> overconfidence in the theoretical-mechanistic perspective that you
>> project onto me.
>
> You are the one developing a philosophy making humans with prosthetic
> brains less human, if not zombies.
I'm not against a prosthetic brain, I just think that it's going to
have to be made of some kind of cells that live and die, which may
mean that it has to be organic, which may mean that it has to be based
on nucleic acids.
Your theory would predict that, if we look long enough, we should
see naturally evolved brains made out of a variety of materials not
based on living cells. I don't think that is necessarily the case.
>>>>>>> This is the kind of strong metaphysical and Aristotelian
>>>>>>> assumption which I am not sure to see the need for, beyond
>>>>>>> extrapolating from our direct experience.
>>>>>> Is it better to extrapolate only from indirect experience?
>>>>> It is better to derive from clear assumptions.
>>>> Clear assumptions can be the most misleading kind.
>>> But that is the goal. Clear assumptions lead to clear misleading,
>>> which can then be corrected with respect to facts, or repeatable
>>> experiments. Unclear assumptions lead to arbitrariness, racism,
>>> etc.
>> To me the goal is to reveal the truth,
> That is a personal goal. I don't think that truth can be revealed,
> only questioned.
How can you question it if it is not revealed?
>> regardless of the nature of the assumptions which are required to
>> get there. If you a priori prejudice the cosmos against
>> figurative, multivalent phenomenology then you just confirm your
>> own bias.
> I don't hide this, and it is part of the scientific (modest)
> method. I assume comp, and I derive consequences in that frame.
> Everyone is free to use this for or against some world view.
It's a good method for so many things, but not everything, and I'm
only interested in solving everything.
>>>> I don't think there is a microcosmos illusion, unless you are
>>>> talking about the current assumptions of the Standard Model as
>>>> particles. That's not an illusion though, just a specialized
>>>> interpretation that doesn't scale up to the macrocosm. As far as
>>>> where sensorimotive phenomena come from, they precede causality.
>>>> 'Comes from' is a sensorimotive proposition and not the other
>>>> way around. The singularity functions inherently as supremacy of
>>>> orientation, and sense and motive are energetic functions of the
>>>> difference between it and its existential annihilation through
>>>> time and space.
>>> That does not help.
>> That doesn't help me either.
> I mean: I don't understand. Too many precise terms in a field
> where we question the meaning of even simpler terms.
I have precise terms because I have a precise understanding of what I
mean.
I'm saying that causality is an epiphenomenon of a feeling of
succession, which is a specific category of the sensorimotive
palette, like pain or blue. All of these feelings and experiences
are generated by the underlying dynamic of the singularity chasing
its tail through the relatively fictional expansion of timespace.
>>>> Specifically, like if you have any two atoms, something must
>>>> have a sense of what is supposed to happen when they get close
>>>> to each other. Iron atoms have a particular way of relating
>>>> that's different from carbon atoms, and that relation can be
>>>> quantified. That doesn't mean that the relation is nothing but
>>>> a quantitative skeleton. There is an actual experience going on
>>>> - an attraction, a repulsion, momentum, acceleration... various
>>>> states of holding, releasing, or binding a 'charge'. What looks
>>>> like a charge to us under a microscope is in fact a
>>>> proto-feeling with an associated range of proto-motivations.
>>> Why?
>> Because that's what we are made of.
> Why should I take your words for granted?
You don't have to. You should check it out for yourself and see if it
makes sense, and if not, why not?
>>> ? (I let you know that one of my main motivations consists in
>>> explaining the physical, that is, explaining it without using
>>> physical notions and assumptions. The same for consciousness.)
>> But what you are explaining it with is no more explainable than
>> physical notions or assumptions. Why explain what is real in
>> terms which are not real?
> You are just begging the question. You talk as if you knew what is
> real or not.
I know that consciousness is real,
and my consciousness through my
body tells me that matter is real.
My consciousness also tells me that
some of its own contents do not matter and its perceptions do not
faithfully render what is real outside of my awareness. I would say
that arithmetic truths matter but they are not real, and therefore
cannot be manifested in a vacuum - only through some material object
which can accommodate the corresponding sensorimotive experiences.
You can't write a program that runs on a computer made of only
liquid or vapor - you need solid structures to accommodate fixed
arithmetic truths. You need the right kinds of matter to express
arithmetic truths, but matter does not need arithmetic to experience
its own being.
> Now it is the fact that all scientists agree with simple facts
> like 1+9=10, etc. Actually they are using such facts already in
> their theories. I just show that IF we are machines, THEN those
> elementary facts are enough to explain the less elementary ones.
But since we aren't only machines, it's a dead end. It's circular
reasoning: you can say we can't prove we're not machines, but the
whole idea of 'proving' is mechanical, so you are just magnifying
the implicit prejudice and getting further from the non-mechanistic
truths of awareness.
>>>>>> The link between the sensorimotive and electromagnetic is the
>>>>>> invariance between the two.
>>>>> ?
>>>> Feelings and action potentials have some phenomenological
>>>> overlap.
>>> What is feeling, what is action, what is potential?
>> To ask what feeling is can only be sophistry.
> Not when addressing issues in fundamental cognitive science.
> Neither matter nor consciousness should be taken as simple
> elementary notions.
But numbers should be taken as elementary notions? That's the
problem: you are trying to explain awareness as an epiphenomenon of
cognitive science, when of course cognition arises from feeling
(otherwise babies would come out of the womb solving math equations
instead of crying, and civilizations would evolve binary codes
before ideographic alphabets and cave paintings).
>> It is a primitive of human subjectivity, and possibly universal
>> subjectivity. To experience directly, qualitatively,
>> significantly. An action potential is an electromagnetic spike
>> train among neurons. They can be correlated to instantiations of
>> feelings.
> I agree with all this, but that has to be explained, not taken for
> granted.
How can any primitive be explained? If explanation is to reduce to
simpler known phenomena, and a primitive is the simplest knowable
phenomenon, then it's a contradiction to explain it any further. We
can only place it into a meaningful context, which I think my
hypothesis does.
>>>> That's the link. They both map to the same changes at the same
>>>> place and time, they just face opposite directions.
>>>> Electromagnetism is the public front end, sensorimotive is the
>>>> private back end, which for us can focus its attention toward
>>>> the front, the back, or the link in between.
>>> ?
>> Electromagnetic and sensorimotive phenomena are opposite sides of
>> the same thing. I don't know how I could make it more clear.
> That is your main problem.
Ok, but what isn't clear? 'Opposite'? 'Same thing'?
Electromagnetism? Sensorimotive?
Electromagnetism is the name we give to the various phenomena of
matter across space - waving, attracting, repulsing, moving,
intensifying, discharging, radiating, accumulating density, surfaces,
depth, consistency, etc. Sensorimotivation is the name I'm giving to
the various phenomena of experience (energy) through time - detecting,
sensing, feeling, being, doing, intention, image, emotion, thought,
meaning, symbol, archetype, metaphor, semiotics, communication,
arithmetic, etc.
>> Electromagnetism is public, generic, a-signifying, and
>> sensorimotive experience is private, proprietary and signifying.
> That is like saying, in the machine language, that
> electromagnetism is of type Bp, and sensorimotive is of type
> Bp & p, but I think that electromagnetism is of type Bp & Dt, and
> sensorimotive is of type Bp & Dt & p. A part of your intuition
> might be accessible to a computer, making your dismissal of the
> possibility of comp even more premature.
What's Dt?
I think I know what Bp and p are but maybe define them longhand so I
can be sure.
> Hmm... The difference between subjective and sensorimotive would
> be captured by the difference between Bp & p and Bp & Dt & p. That
> confirms my feeling described above.
I'll get back to you if you can explain the variables better. I tried
Googling them but nothing clear comes up for me.
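[Editorial note: the notation comes from Gödel-Löb provability logic. The gloss below is a reconstruction and is not spelled out anywhere in the thread; the parenthetical names follow the terminology Marchal uses elsewhere for his arithmetical hypostases.]

```latex
% Provability-logic reading of the modal notation (editorial gloss).
% B is the machine's provability predicate, D its dual,
% t and f the propositional constants true and false.
\begin{align*}
Bp &: \text{``the machine proves } p\text{''} \\
Dp &\equiv \lnot B\lnot p : \text{``} p \text{ is consistent for the machine''} \\
Dt &\equiv \lnot Bf : \text{``the machine is consistent'' (unprovable, by G\"odel II)} \\
Bp \land p &: \text{provable and true (the knower)} \\
Bp \land Dt &: \text{provable and consistent (intelligible matter)} \\
Bp \land Dt \land p &: \text{provable, consistent and true (sensible matter)}
\end{align*}
```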
>>>> I was curious about Hava Siegelmann's theories about analog
>>>> computation.
>>> That's a material phenomenon, and they can be used to perform
>>> some computations, but with digital mechanism they can be
>>> recovered in the physical reality. They can't be primitive.
>> What if material is primitive?
> Then comp is false. And you have to make this clear by assuming
> the relevant infinities.
What has to be infinite in order for comp to be false, and isn't comp
already assuming that arithmetic is non-axiomatizable and therefore
infinite?
> We would also be led to the peculiar situation where machines
> could correctly prove that they are not machines,
I don't see how matter as a primitive makes machines able to prove
that they are not machines.
I think a machine (or something we
presume is a machine) proves whether or not it is a machine by how
it responds to errors or hardware failures.
You could maybe say that what
we are made of is an accumulation of the universe's favorite errors,
failures, and aberrations.
> making all possible discourses of machines being of type Bf. You
> might eventually change my mind on the non-provability of comp (as
> opposed to the non-recognizability of our level of comp). For this
> you should convince the machine that material is necessarily
> primitive. I begin to doubt that non-comp can make any sense.
> Hmm...
If I pull the plug on the machine, then the machine halts. Why
should that be the case were the machine independent of its
material substrate?
I agree with Craig, although the way he presents it might seem a bit
uncomputationalist, (if I can say(*)).
Thoughts act on matter all the time. It is a selection of histories +
a sharing. Like when a sculptor isolates an art form from a rock, and
then sends it to a museum. If mind did not act on matter, we would
not have been able to fly to the moon, and I am not sure even birds
could fly. It asks for relative works and time, and numerous deep
computations.
When you prepare coffee, mind acts on matter. When you drink coffee,
matter acts on mind. No problem here (with comp).
And we can learn to control a computer at a distance, but there is
no reason to suppose that computers can't do that.
Bruno
(*) My computer put a red line under that word :)
Whether a neuron fires or not depends on its internal state and its
environment, especially the activity of the neurons with which it
interfaces. Whether the door opens depends on the key used, the mass
of the door, the friction in the hinges and the force applied to it.
Maybe the door has the experience of wanting to open if it opens or of
not wanting to open if it doesn't open, in which case we could say
that the door did what it wanted to do. This is perfectly consistent
with our observation of doors since we cannot observe the door qualia.
But the qualia will never move the door contrary to physics. As with
the door, you can say the neuron fired because it wanted to fire and
this could be perfectly consistent with the neuron firing due to the
multiple physical factors. It is the moving that causes the wanting;
if it were the other way around we would see doors opening and neurons
firing magically. I have stated this multiple times in different ways
and you deny that it would be magic, but when an unobservable
influence causes an observable effect that is magic by definition.
Note that I'm not even saying such magic is impossible, just that no
scientist has ever seen it, which is difficult to explain if it
happens all the time as you claim.
--
Stathis Papaioannou
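[Editorial aside, not part of the thread: Stathis's door analogy can be put as a toy program; every name in it is invented for illustration. The door's behaviour is fixed entirely by observable mechanics, while a "wanting" label derived afterwards is consistent with any outcome and so adds no predictive information.]

```python
# Toy model of the door analogy: the 3-p dynamics alone decide
# whether the door opens; the "qualia" label is computed from the
# same physical facts and never alters the outcome.

def door_opens(key_turned: bool, force: float, friction: float) -> bool:
    # purely mechanical rule: unlocked, and pushed harder than friction
    return key_turned and force > friction

def imputed_wanting(opened: bool) -> str:
    # qualia ascribed after the fact; compatible with any observation
    return "wanted to open" if opened else "wanted to stay shut"

opened = door_opens(key_turned=True, force=5.0, friction=2.0)
print(opened, "->", imputed_wanting(opened))
```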
> I agree with Craig, although the way he presents it might seems a bit
> uncomputationalist, (if I can say(*)).
>
> Thoughts act on matter all the time. It is a selection of histories + a
> sharing. Like when a sculptor isolates an art form from a rock, and then
> send it in a museum. If mind did not act on matter, we would not have been
> able to fly to the moon, and I am not sure even birds could fly. It asks for
> relative works and time, and numerous deep computations.
>
> When you prepare coffee, mind acts on matter. When you drink coffee, matter
> acts on mind. No problem here (with comp).
>
> And we can learn to control computer at a distance, but there is no reason
> to suppose that computers can't do that.
Mind acts on matter in a manner of speaking, but matter will not do
anything that cannot be explained in terms of the underlying physics.
An alien scientist could give a complete description of why humans
behave as they do and make a computational model that accurately
simulates human behaviour while remaining ignorant about human
consciousness. But the alien could not do this if he were ignorant
about protein chemistry, for example.
--
Stathis Papaioannou
> We do see neurons firing in response to no other stimulation other
> than the subjects conscious attention and intention. It's not magic,
> it's how it actually works. It's how you are making sense of these
> words right now. You can have your neurons move a mouse around, either
> through your hand or directly through one of those scalp rigs. It's
> only magic if you arbitrarily deny the subjects observation of their
> own subjective behavior.
The neurons are firing in my brain as I'm thinking, but if you could
go down to the microscopic level you would see that they are firing
due to the various physical factors that make neurons fire, eg. fluxes
of calcium and potassium caused by ion channels opening due to
neurotransmitter molecules binding to the receptors and changing their
conformation. If you take each neuron in the brain in turn at any
given time it will always be the case that it is doing what it is
doing due to these factors. You will never find a ligand-activated ion
channel opening in the absence of a ligand, for example. That would be
like a door opening in the absence of any force. Just because doors
and protein molecules are different sizes doesn't mean that one can do
magical things and the other not.
--
Stathis Papaioannou
>> The neurons are firing in my brain as I'm thinking, but if you could
>> go down to the microscopic level you would see that they are firing
>> due to the various physical factors that make neurons fire, eg. fluxes
>> of calcium and potassium caused by ion channels opening due to
>> neurotransmitter molecules binding to the receptors and changing their
>> conformation. If you take each neuron in the brain in turn at any
>> given time it will always be the case that it is doing what it is
>> doing due to these factors. You will never find a ligand-activated ion
>> channel opening in the absence of a ligand, for example. That would be
>> like a door opening in the absence of any force. Just because doors
>> and protein molecules are different sizes doesn't mean that one can do
>> magical things and the other not.
>
> You will also never find a ligand activated ion channel that is
> associated with a particular subjective experience fire in the absence
> of that subjective experience (that would be a zombie, right?), so why
> privilege the pixels of the thing as the determining factor when the
> overall image is just as much dictating which pixels are lit and how
> brightly? Again, every time you mention magic it just means that you
> don't understand my point. Every time you mention it, I am going to
> give you the same response. I understand your position completely, but
> you are just throwing dirt clods in the general direction of mine
> while closing your eyes.
The ion channel only opens when the ligand binds. The ligand only
binds if it is present in the synapse. It is only present in the
synapse when the presynaptic neuron fires. And so on. This whole
process is associated with an experience, but it is a completely
mechanical process. The equivalent is my example of the door: it opens
because someone turns the key and pushes it. If it had qualia it may
also be accurate to say that it opens because it wants to open, but
since we can't see the qualia they can't have a causal effect on the
door. If they could we would see the door opening by itself and we
would be amazed. It's the same with the neuron: if the associated
qualia had a causal effect on matter we would see neurons firing in
the absence of stimuli, which would be amazing.
Again, it's not that it's wrong to say that the neurons fired in the
amygdala because the person thought about gambling, it's that the
third person observable behaviour of the neurons can be entirely
explained and predicted without any reference to qualia. If the
neurons responded directly to qualia they would be observed to do
miraculous things and it may not be possible to predict or model their
behaviour.
--
Stathis Papaioannou
On Oct 4, 2:11 am, Stathis Papaioannou <stath...@gmail.com> wrote:
>
> The ion channel only opens when the ligand binds. The ligand only
> binds if it is present in the synapse. It is only present in the
> synapse when the presynaptic neuron fires. And so on.
It's the 'and so on' where your explanation breaks down. You are
arbitrarily denying the top down, semantic, subjective participation
as a cause. There is no presynaptic neuron prior to the introduction
of the thought of gambling.
The thought is the firing of many neurons.
They are the same thing, except that they are firing because the
subject chooses to realize a particular motivation (to think about
something or move a mouse, etc). There is no neurological reason
why those neurons would fire. They would not otherwise fire at that
particular time.
> This whole
> process is associated with an experience, but it is a completely
> mechanical process.
Starting a car initiates a mechanical process, and driving a car
executes a mechanical process, but without the driver choosing to
start the car and use the steering wheel and pedals to correspond
with their subjective perception and motivation, the car doesn't do
anything but idle. You cannot predict where a car is going to go
based on an auto mechanic's examination of the car.
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
> On Tue, Oct 4, 2011 at 4:09 AM, Bruno Marchal <mar...@ulb.ac.be>
> wrote:
>
>> I agree with Craig, although the way he presents it might seems a bit
>> uncomputationalist, (if I can say(*)).
>>
>> Thoughts act on matter all the time. It is a selection of histories
>> + a
>> sharing. Like when a sculptor isolates an art form from a rock, and
>> then
>> send it in a museum. If mind did not act on matter, we would not
>> have been
>> able to fly to the moon, and I am not sure even birds could fly. It
>> asks for
>> relative works and time, and numerous deep computations.
>>
>> When you prepare coffee, mind acts on matter. When you drink
>> coffee, matter
>> acts on mind. No problem here (with comp).
>>
>> And we can learn to control computer at a distance, but there is no
>> reason
>> to suppose that computers can't do that.
>
> Mind acts on matter in a manner of speaking, but matter will not do
> anything that cannot be explained in terms of the underlying physics.
Locally, you are right. But the physics itself arises from the
arithmetical computation structures on which consciousness
supervenes (to be short). So I am not sure if the expression of
consciousness duration for very short "emulation time" makes sense.
In fact, between any two sequential computational states *at some
level of description*, there exists an infinity of computational
states belonging to computations generated by the UD going through
them *at some more refined level*, and this participates in the
first person experience generation (as in its material
constitution).
> An alien scientist could give a complete description of why humans
> behave as they do and make a computational model that accurately
> simulates human behaviour while remaining ignorant about human
> consciousness. But the alien could not do this if he were ignorant
> about protein chemistry, for example.
OK.
Bruno
http://iridia.ulb.ac.be/~marchal/
This goes by the name "causal completeness": the idea that the 3-p
observable state at t is sufficient to predict the state at t+dt.
Craig wants to add to this that there is additional information
which is not 3-p observable and which makes a difference, so that
the state at t+dt depends not just on the 3-p observables at t, but
also on some additional "sensorimotive" variables. If you assume
these variables are not independent of the 3-p observables, then
this is just a panpsychic version of consciousness supervening on
the 3-p states. They are redundant in the informational sense. If
you assume they are independent of the 3-p variables and yet make a
difference in the time evolution of the state, then it means the
predictions based on the 3-p observables will fail, i.e. the laws
of physics and chemistry will be violated.
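[Editorial aside, not part of the thread: the dilemma just stated can be sketched in a few lines of code; x and h are invented names. x stands for the 3-p observable state, h for a hypothetical "sensorimotive" variable invisible to third-person observation.]

```python
# Either h is fixed by x (informationally redundant), or it varies
# independently of x (3-p predictions fail, i.e. the assumed
# physical law is violated).

def step(x: int, h: int) -> int:
    # the "true" dynamics: the hidden variable h influences evolution
    return x + 1 + h

def predict_3p(x: int) -> int:
    # an observer who sees only x assumes the bare law x -> x + 1
    return x + 1

# Horn 1: h supervenes on x (here, always 0 given x): prediction holds.
assert step(5, 0) == predict_3p(5)
# Horn 2: h varies independently of x: the 3-p prediction fails.
assert step(5, 1) != predict_3p(5)
```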
Of course this violation may be hard to detect in something very complicated like a brain;
but Craig's theory doesn't seem to assume the brain is special in that respect and even a
single electron supposedly has these extra, unobservable variables, i.e. a mind of its
own. The problem with electrons or other simple systems is that while we have complete
access to their 3-p variables, we don't have access to their hypothetical other variables;
the ones we call 1-p when referring to humans. So when all the silver atoms in a
Stern-Gerlach do just as we predict, it can be claimed that they all had the same 1-p
variables and that's why the 3-p variables were sufficient to predict their behavior.
So the only way I see to test this theory, even in principle, would be to observe Craig's
brain at a very low level while having him report his experiences (at least to himself)
and show that his experiences and his brain states were not one-to-one. Of course this is
probably impossible with current technology. Observing the brain at
a coarse-grained level leaves open the possibility that one is just
missing the 3-p variables that would show the relationship to be
one-to-one.
So I'd say that until someone thinks of an empirical test for this "soul theory",
discussing it is a waste of bandwidth.
Brent
Did I use the word "completely"?
> I've given several examples demonstrating how we routinely exercise
> voluntary control over parts of our minds, bodies, and environment
> while at the same time being involuntarily controlled by those same
> influences, often at the same time. This isn't a theory, this is the
> raw data set.
No it's not. In your examples of voluntary control you don't know
what your brain is doing. So you can't know whether your
"voluntary" action was entirely caused by physical precursors or
whether there was some effect from libertarian free-will.
>
> If it were the case that the 3p and 1p were completely independent,
> then you would have ghosts jumping around into aluminum cans and
> walking around singing, and if they were completely dependent then
> there would be no point in being able to differentiate between
> voluntary and involuntary control of our mind, body, and environment.
Exactly the point of compatibilist free-will.
> Such an illusory distinction would not only be redundant but it would
> have no ontological basis to even be able to come into being or be
> conceivable. It would be like an elephant growing a TV set out of it's
> trunk to distract it from being an elephant.
Or pulling another meaningless example out of the nether regions.
>
> Since neither of those two cases is possible, I propose, as I have
> repeatedly proposed, that the 3p and 1p are in fact part of the same
> essential reality in which they overlap, but that they each extent in
> different topological directions;
What's a topological direction?
> specifically, 3p into matter, public
> space, electromagnetism, entropy, and relativity, and 1p into energy,
> private time, sensorimotive, significance, and perception.
"3p overlaps into entropy"!? Reads like gibberish to me.
>
> No laws of physics are broken by consciousness, but it is very
> confusing because our only example of consciousness is human
> consciousness, which is a multi-trillion cell awareness.
Exactly what I said. In fact one's only example of consciousness is their own. The
consciousness of other humans is an inference.
> The trick is
> to realize that you cannot directly correlate our experience of
> consciousness with the 3-p cellular phenomenology, but to only
> correlate it with the 3-p behavior of the brain as a whole.
That's the experimental question, and you don't know the answer.
> That's the
> starting point. If you are going to try to understand what a movie is
> about, you have to look at the whole images of the movie, and not
> focus on the pixels of the screen or the mechanics of pixel
> illumination to guide your interpretation. There is no human
> consciousness at that low level. There may be sensorimotive 1-p
> phenomenology there, and I think that there is, but we can't prove it
> now. What we can prove is there in 3-p would only relate to that low
> level 1-p which is unknown to us.
>
> My proposition is that our 1-p consciousness builds from lower level 1-
> p awareness and higher level 1-p semantic environmental influences,
> like cultural ideas, family traditions, etc.
But that is entirely untestable since we have no access to those 1-p consciousnesses.
Cultural ideas, family traditions are 3-p observables.
> It is not predictable
> from 3-p appearances alone, but not because it breaks the laws of
> physics. Physics has nothing to say about what particular patterns
> occur in the brain as a whole.
Sure it does - unless magic happens.
> There is no relevant biochemical
> difference between a one thought and another that could make it
> impossible physically,
So you say. But I think there is. If you think of an elephant there is something
biochemical happening that makes it not a thought about a giraffe. So when you read
"elephant" it is impossible to think of a giraffe at that moment.
> just as there is no sequence of illuminated
> pixels that is preferred by a TV screen, or electronics, or physics.
>
>> Of course this violation maybe hard to detect in something very complicated like a brain;
>> but Craig's theory doesn't seem to assume the brain is special in that respect and even a
>> single electron supposedly has these extra, unobservable variables, i.e. a mind of its
>> own.
> No. I have never said that a particle has a mind of it's own, I only
> say that it may have a sensorimotive quality which is primitive like
> charge or spin, but that this quality scales up in a different way
> than quantitative properties.
Scales up how? How is this sensormotive quality detected or measured? What's its
operational definition? How is it different from connective complexity of processes -
which is the quality that most people think gives a brain its special quality.
> The brain is very special *to us* and I
> suspect that it is pretty special relatively speaking as far as
> processes in the Cosmos. It's not special because it has awareness
> though, it's just the degree to which that awareness is elaborated and
> concentrated.
>
>> The problem with electrons or other simple systems is that while we have complete
>> access to their 3-p variables, we don't have access to their hypothetical other variables;
>> the ones we call 1-p when referring to humans. So when all the silver atoms in a
>> Stern-Gerlach do just as we predict, it can be claimed that they all had the same 1-p
>> variables and that's why the 3-p variables were sufficient to predict their behavior.
> Why is that a problem?
It's a problem because it makes your theory untestable for anything except a human brain.
>
>> So the only way I see to test this theory, even in principle, would be to observe Craig's
>> brain at a very low level while having him report his experiences (at least to himself)
>> and show that his experiences and his brain states were not one-to-one.
> No, I'm not saying that 1-p and 3-p are not synchronized, they are
> synchronized, but that doesn't mean that voluntary choices supervene
> on default neurological processes. Look at how our diaphragm works. We
> can voluntarily control our breathing to a certain extent, but there
> are involuntary default behaviors as well. This does not mean that we
> can't decide to hold our breath or that it can only be our body which
> is doing the deciding. How do you explain the appearance of voluntary
> control of our body?
It appears voluntary because we can't perceive the brain processes
that produce the action. So when the action comports with the
brain's usual pathways we feel we did it *voluntarily*. Which is
the point of David Eagleman's experiment with shifting a person's
time calibration. If he shifted it so that the result appeared
earlier (in subjective time) than the voluntary act, then the
person no longer felt that they had done it. It happened without
them.
>
>> Of course this is
>> probably impossible with current technology. Observing the brain at a coarse grained
>> level leaves open the possibility that one is just missing the 3-p variables that you show
>> the relationship to be one-to-one.
>>
>> So I'd say that until someone thinks of an empirical test for this "soul theory",
>> discussing it is a waste of bandwidth.
> Way to argue from authority. "Your thoughts are a waste of everyone's
> time unless I think that they can be proved to my satisfaction".
I didn't say anything about which outcome would satisfy me. I said it's a waste of time
to argue a theory that cannot be tested.
Brent
> This goes by the name "causal completeness"; the idea that the 3-p
> observable state at t is sufficient to predict the state at t+dt. Craig
> wants add to this that there is additional information which is not 3-p
> observable and which makes a difference, so that the state at t+dt depends
> not just on the 3-p observables at t, but also on some additional
> "sensorimotive" variables. If you assume these variables are not
> independent of the 3-p observables, then this is just panpsychic version of
> consciousness supervening on the 3-p states. They are redundant in the
> informational sense. If you assume they are independent of the 3-p
> variables and yet make a difference in the time evolution of the state then
> it means the predictions based on the 3-p observables will fail, i.e. the
> laws of physics and chemistry will be violated.
>
> Of course this violation maybe hard to detect in something very complicated
> like a brain; but Craig's theory doesn't seem to assume the brain is special
> in that respect and even a single electron supposedly has these extra,
> unobservable variables, i.e. a mind of its own. The problem with electrons
> or other simple systems is that while we have complete access to their 3-p
> variables, we don't have access to their hypothetical other variables; the
> ones we call 1-p when referring to humans. So when all the silver atoms in
> a Stern-Gerlach do just as we predict, it can be claimed that they all had
> the same 1-p variables and that's why the 3-p variables were sufficient to
> predict their behavior.
That's a bit like saying there are fairies at the bottom of the garden
but they hide whenever we look for them. According to Craig, the 1-p
influence (which is equivalent to an immaterial soul) is ubiquitous in
living things, and possibly in other things as well. I think if no
scientist has ever seen evidence of this ubiquitous influence that is
good reason to say that it doesn't exist. In fact, Craig himself
denies that his theory would manifest as violation of physical law,
and is therefore inconsistent.
> So the only way I see to test this theory, even in principle, would be to
> observe Craig's brain at a very low level while having him report his
> experiences (at least to himself) and show that his experiences and his
> brain states were not one-to-one. Of course this is probably impossible
> with current technology. Observing the brain at a coarse grained level
> leaves open the possibility that one is just missing the 3-p variables that
> would show the relationship to be one-to-one.
>
> So I'd say that until someone thinks of an empirical test for this "soul
> theory", discussing it is a waste of bandwidth.
--
Stathis Papaioannou
I wrote "not independent" and "independent". Those are mutually exclusive in any logic I
know of. But "not independent" is not the same as "completely dependent". Try reading
what is written.
>
>>> I've given several examples demonstrating how we routinely exercise
>>> voluntary control over parts of our minds, bodies, and environment
>>> while at the same time being involuntarily controlled by those same
>>> influences, often at the same time. This isn't a theory, this is the
>>> raw data set.
>> No it's not. In your examples of voluntary control you don't know what your brain is
>> doing. So you can't know whether your "voluntary" action was entirely caused by physical
>> precursors or whether there was some effect from libertarian free-will.
> What difference does it make what your brain is doing to be able to
> say that you are voluntarily controlling the words that you type here?
>
>>
>>
>>> If it were the case that the 3p and 1p were completely independent,
>>> then you would have ghosts jumping around into aluminum cans and
>>> walking around singing, and if they were completely dependent then
>>> there would be no point in being able to differentiate between
>>> voluntary and involuntary control of our mind, body, and environment.
>> Exactly the point of compatibilist free-will.
> What does that label add to this conversation?
It makes the discussion precise; instead of wandering around analogies and metaphors.
>
>>> Such an illusory distinction would not only be redundant but it would
>>> have no ontological basis to even be able to come into being or be
>>> conceivable. It would be like an elephant growing a TV set out of its
>>> trunk to distract it from being an elephant.
>> Or pulling another meaningless example out of the nether regions.
> Why meaningless? I'm pointing out that the illusion of free will in a
> deterministic universe would be not merely puzzling but fantastically
> absurd. Your criticism is arbitrary.
You're "pointing out" the very thing that is in dispute. Your assertion that it is absurd is
not a substitute for saying how it could be tested and found false.
>
>>
>>
>>> Since neither of those two cases is possible, I propose, as I have
>>> repeatedly proposed, that the 3p and 1p are in fact part of the same
>>> essential reality in which they overlap, but that they each extent in
>>> different topological directions;
>> What's a topological direction?
> matter elaborates discretely across space, energy elaborates
> cumulatively through time.
A creative use of "elaborates"... it does not parse.
>
>>> specifically, 3p into matter, public
>>> space, electromagnetism, entropy, and relativity, and 1p into energy,
>>> private time, sensorimotive, significance, and perception.
>> "3p overlaps into entropy"!? Reads like gibberish to me.
> 3-p doesn't overlap entropy, 3-p is entropic. 1-p is syntropic. The
> overlap is the 'here and now'. I'm not sure that it matters what I say
> though, you're mainly just auditing my responses for technicalities so
> that you can get a feeling of 'winning' a debate. It's a sensorimotive
> circuit. A feeling that you are seeking which requires a particular
> kind of experience to satisfy it. If I could offer you a drug instead
> that would stimulate the precise neural pathways involved in feeling
> that you had proved me wrong in an objective way, would that be
> satisfying to you? Would there be no difference in being right versus
> having your physical precursors to feeling right get tweaked? Isn't
> that what you are saying, that in fact this discussion is nothing but
> brain drugs with no free will determining our opinions? Isn't being
> right or wrong just a matter of biochemistry?
No, it's a matter of passing an empirical test.
>
>>
>>
>>> No laws of physics are broken by consciousness, but it is very
>>> confusing because our only example of consciousness is human
>>> consciousness, which is a multi-trillion cell awareness.
>> Exactly what I said. In fact, one's only example of consciousness is one's own. The
>> consciousness of other humans is an inference.
> I agree. Although I would qualify the inference. It's more of an
> educated inference. I'm making a different point with it though. I'm
> saying there is a problem with our default assumptions about micro
> brain mechanisms correlating with macro psychological experiences.
Fine. Think of a test that would prove the competing theory wrong.
>
>>> The trick is
>>> to realize that you cannot directly correlate our experience of
>>> consciousness with the 3-p cellular phenomenology, but to only
>>> correlate it with the 3-p behavior of the brain as a whole.
>> That's the experimental question, and you don't know the answer.
> I don't claim to have the answer, but I have a hypothesis, which has
> to be understood using this way of looking at the mind and brain.
>
>>> That's the
>>> starting point. If you are going to try to understand what a movie is
>>> about, you have to look at the whole images of the movie, and not
>>> focus on the pixels of the screen or the mechanics of pixel
>>> illumination to guide your interpretation. There is no human
>>> consciousness at that low level. There may be sensorimotive 1-p
>>> phenomenology there, and I think that there is, but we can't prove it
>>> now. What we can prove is there in 3-p would only relate to that low
>>> level 1-p which is unknown to us.
>>> My proposition is that our 1-p consciousness builds from lower level 1-
>>> p awareness and higher level 1-p semantic environmental influences,
>>> like cultural ideas, family traditions, etc.
>> But that is entirely untestable since we have no access to those 1-p consciousnesses.
>> Cultural ideas, family traditions are 3-p observables.
> We have access to our own 1-p consciousness. What else do we need?
We need to show that it is not entirely determined by the physical evolution of the brain.
> Cultural ideas and family traditions are not 3-p observable - they
> have no melting point or specific gravity, they occupy no location -
> they must be inferred by 1-p interpretation/participation/consensus.
Everything is inferred from 1-p experiences. But cultural ideas and traditions are
public; they can be observed by more than one person and they can reach intersubjective
agreement just like any other facts about the world.
>
>>> It is not predictable
>>> from 3-p appearances alone, but not because it breaks the laws of
>>> physics. Physics has nothing to say about what particular patterns
>>> occur in the brain as a whole.
>> Sure it does - unless magic happens.
> Consciousness happens. Physics has nothing to say about what the
> content of any particular brain's thoughts should be. If I give you a
> book about Marxism then you will have thoughts about Marxism - not
> about whatever physical modeling of a brain of your genetic makeup
> would suggest.
Do you think a book about Marxism is not physical and reading it is not a physical
process? What is your evidence for this? That's the whole question: is thinking a purely
physical process, or does it include some extra-physical part?
An operational definition is in terms of operations that will detect or measure something.
> Sensorimotive
> phenomena are a universal primitive. It is the capacity for
> participatory being - to detect and respond to changing interior and
> exterior conditions.
>
>> How is it different from connective complexity of processes -
>> which is the quality that most people think gives a brain its special quality.
> Without sensorimotive qualities, those processes cannot be experienced
> by anything. What knows the difference between simplicity and
> complexity if you have no awareness to distinguish it?
If you have no awareness then you don't know anything. It doesn't follow that everything
depends on your awareness of it.
What's the operational definition of "voluntary"? Does it exclude "determined by physics"?
> It just means that our
> psyche is very complex and arriving at a consensus can only happen so
> fast. Measurements faster than that are going to look strange, just as
> freezing a movie mid frame is going to give you some strange artifacts
> and blurs that defy ordinary expectations of what a movie should look
> like.
>>
>>
>>>> Of course this is
>>>> probably impossible with current technology. Observing the brain at a coarse grained
>>>> level leaves open the possibility that one is just missing the 3-p variables that would
>>>> show the relationship to be one-to-one.
>>>> So I'd say that until someone thinks of an empirical test for this "soul theory",
>>>> discussing it is a waste of bandwidth.
>>> Way to argue from authority. "Your thoughts are a waste of everyone's
>>> time unless I think that they can be proved to my satisfaction".
>> I didn't say anything about which outcome would satisfy me. I said it's a waste of time
>> to argue a theory that cannot be tested.
> It can be tested, just maybe not with the technology we are using. You
> could build instruments which use living tissue to test these ideas.
> Replace someone's eye with a petri-dish retina that can serve as a
> laboratory for different types of cells to see if vision can be
> recreated out of other kinds of tissue, see if you get new colors,
> etc. There's all kinds of ways this theory could be tested,
How would you know if it perceived new colors? You couldn't ask it, and you have no
access to its qualia (if it has any).
Brent
Right.
> According to Craig, the 1-p
> influence (which is equivalent to an immaterial soul) is ubiquitous in
> living things, and possibly in other things as well.
But he doesn't say what effect it has. It could be anything and hence could explain any
experimental result.
Brent
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
I used to think that too, but I have
a better way of making sense of it now.
Craig
> On Oct 3, 11:16 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>>>>> I don't think that there are any arithmetical beings.
>>
>>>> In which theory?
>>
>>> In reality.
>>
>> That type of assertion is equivalent to "because God says so".
>> Reality is what we try to figure out.
>> If you know for sure what reality is, then I can do nothing, except
>> perhaps invite you to cultivate more the modest doubting attitude.
>
> Ok, let's say that I'm mathgnostic. I doubt the existence of
> arithmetic beings independent of matter.
I doubt the existence of matter being independent of arithmetic.
> I am sympathetic to
> numerological archetypes as coherent themes (or themes of coherence)
> which run through perception but to say that arithmetic spirits haunt
> empty space
Empty spaces haunt the dreams of numbers.
> doesn't orient me to anything true or real, it seems like
> pure fiction.
You reify spaces, so that arithmetical beings look magical. But
arithmetical truth is out of any physical category.
A number is just not the type of entity having a location, although it
can manifest itself through locally physical realities.
> If it were the case then I would expect five milk
> bottles in a group to have the same basic function as five protons in
> a nucleus,
I don't see the logic here.
> five boron atoms in a molecule, five cells in a dish, etc.
> I just don't see any examples of causally efficacious arithmetic as an
> independent agent.
?
"Inside arithmetic" was a shorthand for "as determined through
arithmetical relations, or as observable by persons determined by
arithmetical relations (in a theoretical computer science sense)."
> whereas I know for a fact that I am inside my body.
You are not. You are an immaterial being, and you have no more
location than a number, or a space. But I can explain in detail why
the illusion of having a location can be very strong when a person gets
entangled with deep histories.
> What form of a non-comp theory are you asking for? I will try to
> comply.
Just tell us what you are assuming as primitive, and what you derive
from that.
The best form would be a first-order logical axiomatization, because
those are provably independent of any "metaphysical baggage", to coin
an expression by Brian Tenneson, which sums up well the importance of
such a type of theory. But I know you try to avoid technical literature.
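For concreteness, here is what such a first-order axiomatization can look like. Robinson arithmetic Q is one standard, minimal example, chosen purely as an illustration of a "metaphysics-free" theory of the numbers (successor S, addition, multiplication), not necessarily the exact theory intended here:

```latex
% Robinson arithmetic Q: a minimal first-order theory of arithmetic.
\begin{align*}
&\forall x\;\big(S(x) \neq 0\big) \\
&\forall x\,\forall y\;\big(S(x) = S(y) \rightarrow x = y\big) \\
&\forall x\;\big(x \neq 0 \rightarrow \exists y\; x = S(y)\big) \\
&\forall x\;\big(x + 0 = x\big) \\
&\forall x\,\forall y\;\big(x + S(y) = S(x + y)\big) \\
&\forall x\;\big(x \cdot 0 = 0\big) \\
&\forall x\,\forall y\;\big(x \cdot S(y) = x \cdot y + x\big)
\end{align*}
```

Everything is derived from the axioms by first-order logic alone, which is what makes the primitives explicit.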
>
>>
>>
>>> So yes, arithmetic extends to the inconceivable and nonaxiomatizable
>>> but the sensorimotive gestalts underlying arithmetic are much more
>>> inconceivable and nonaxiomatizable. A greater infinity.
>>
>> Inside arithmetic *is* a bigger infinity than arithmetic. It is not
>> even nameable.
>
> If it's inside of arithmetic, how can it be bigger than itself?
Good question. It is not easy to answer it without being much more
technical. Let me just say that this is a question of internal
perspective. It is related to a phenomenon discovered by Skolem, and
which relativizes the notion of cardinality (used to measure the size
of mathematical objects, and which often measures the size of the
intrinsic ignorance of the entities living in those objects).
I should stick to "from inside, arithmetic will be perceived as bigger
than from outside".
>
>>
>>
>>
>>>>>> So I see a sort of racism against machine or numbers, justified
>>>>>> by
>>>>>> unintelligible sentences.
>>
>>>>> I know that's what you see. I think that it is the shadow of your
>>>>> own
>>>>> overconfidence in the theoretical-mechanistic perspective that you
>>>>> project onto me.
>>
>>>> You are the one developing a philosophy making human with
>>>> prosthetic
>>>> brain less human, if not zombie.
>>
>>> I'm not against a prosthetic brain, I just think that it's going to
>>> have to be made of some kind of cells that live and die, which may
>>> mean that it has to be organic, which may mean that it has to be
>>> based
>>> on nucleic acids.
>>
>> Replace in the quote just above "prosthetic brain" by "silicon
>> prosthetic brain".
>
> I think that if we understand that the brain itself is what is feeling
> and thinking,
You contradict what you told me in another post. You said you agree
that it is not the brain which feels or thinks, but a person.
A brain feels nothing, indeed, for an obvious reason: it is even the only
organ without sensory nerves.
I am very open to the idea that individual neurons can have a proper
inner life, like amoebas and plants, but those are not related to our
inner life. Consciousness is not a sort of sum of the consciousnesses of
the parts of a body, if only because consciousness is not something
material at all. It has no mass, no energy, no velocity, no space
location, nor even any time location (and I agree this might seem
counter-intuitive).
That is why it is far simpler to explain consciousness from the
number, and then physicalness from sharable coherent deep dreams, than
the inverse.
> rather than some disembodied computational function,
> then we have to consider that the material may not be substitutable,
> or if it is, the probability of successful substitution would be
> directly proportional to the isomorphism of the biology. If we knew
> of a particular computation which did cause life and consciousness to
> arise in inanimate objects, then that would be convincing, but thus
> far, we have not seen any suggestion of a computer program plotting
> against its programmer or expressing an unwillingness to be halted.
Life is explained by the second recursion theorem of Kleene. See my
paper on amoeba and planaria.
Consciousness is 99% explained by the fact that machines can prove the
second recursion theorem of Kleene, or by Gödel's diagonalization
lemma. Then we can fully meta-explain why 1% of consciousness is not
explainable by any entity (except perhaps one, which in this case must
remain silent).
The theorem of Kleene assures that for any computable transformation
T, you can find a program applying T to its own 3-I (that is, its body
at any chosen level of description). You get the simple self-
reproductive amoeba by using T = identity.
Programs do not plot against us, because they are too young, and we
still control them rather well.
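A minimal illustration of this point, using Python as a stand-in universal machine (not part of the thread): taking T = identity in Kleene's second recursion theorem yields a program whose output is its own description, i.e. a quine.

```python
import io
import contextlib

# Kleene's second recursion theorem: for any computable transformation T
# there is a program that applies T to its own description. Taking
# T = identity gives a self-reproducing program -- a quine, the
# "self-reproductive amoeba" mentioned above.
quine = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)"

# Run the quine and capture its output.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)

# Its output is exactly its own source text.
print(buf.getvalue().rstrip("\n") == quine)  # True
```

The quoting trick (`%r` inserting the representation of the very string that contains it) is the computational core of the theorem's standard proof.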
>
>>
>>> Your theory would conclude that we should see
>>> naturally evolved brains made out of a variety of materials not
>>> based
>>> on living cells if we look long enough. I don't think that is
>>> necessarily the case.
>>
>> The theory says that it is *possibly* the case, and the advent of
>> computers show it to be the case right now. The difference between
>> artificial and natural is ... artificial.
>>
>
> But why, if biology has nothing to do with life, and neurology has
> nothing to do with consciousness,
Biology is the study of life. I guess you mean that I meant that the
fundamental principles of biology have nothing to do with carbon, but I
am not sure about this. It might be that the structure of carbon and
its role in the origin of life can be deduced from arithmetic. And
this does not mean that, once life has appeared with carbon, carbon
might not be abandoned later.
> do we find no non-biological entity
> having evolved to live or demonstrate human consciousness.
All life forms today depend completely on oxygen, and plants. That was
not true at the beginning. Life tends to proliferate and can easily
adapt the planet's conditions, preventing too many different life forms
from developing. Many different explanations are possible.
Also, you beg the question. Today's machines are evolving more quickly
than any life form. And I have already argued that today's LUMs are as
conscious as you, me, and a jumping spider.
> Doesn't
> that seem unlikely to you? I understand your point that comp promises
> to deliver computers which could be considered as conscious as we are,
> but I think that's only because science is hopelessly confused about
> what consciousness is.
>
> I agree that there is no literal difference between natural and
> artificial, but it's still a glaring deficiency of comp in my mind
> that in the history of the Earth there just so happens to not be any
> non-organic life at all.
The quantum theory of atoms explains well why carbon, hydrogen,
oxygen, nitrogen, have some special role.
> Especially if computers, as you seem to
> suggest, can adopt consciousness just by functioning in the same
> manner as something conscious, then it seems by now there would be
> some cave somewhere where the limestone had learned to dance like a
> beetle or bloom like a flower.
Chemical life has many feature making it rather rare. Very natural,
but very rare. Now, once it happens, it can take much different forms.
>
>>
>>
>>>>>>>>>> This is the kind of strong metaphysical and aristotleian
>>>>>>>>>> assumption
>>>>>>>>>> which I am not sure to see the need for, beyond extrapolating
>>>>>>>>>> from
>>>>>>>>>> our
>>>>>>>>>> direct experience.
>>
>>>>>>>>> Is it better to extrapolate only from indirect experience?
>>
>>>>>>>> It is better to derive from clear assumptions.
>>
>>>>>>> Clear assumptions can be the most misleading kind.
>>
>>>>>> But that is the goal. Clear assumptions lead to clear misleadings,
>>>>>> which can then be corrected with respect to facts, or repeatable
>>>>>> experiments.
>>>>>> Unclear assumptions lead to arbitrariness, racism, etc.
>>
>>>>> To me the goal is to reveal the truth,
>>
>>>> That is a personal goal. I don't think that truth can be revealed,
>>>> only questioned.
>>
>>> How can you question it if it is not revealed?
>>
>> It can be suggested, like in dreams.
>
> So it is better to extrapolate from what our dreams suggest than the
> 'unclear assumptions' of our ordinary, direct, shared, conscious
> experience?
Not at all. But we can deduce facts from the fact that we are dreaming.
And we don't have any *direct* shared conscious experience. That's a
theorem in comp (and evident for grown-up adults, I thought). There
is always some amount of indirection.
>
>>
>>
>>
>>>>> regardless of the nature of the
>>>>> assumptions which are required to get there. If you a priori
>>>>> prejudice
>>>>> the cosmos against figurative, multivalent phenomenology then you
>>>>> just
>>>>> confirm your own bias.
>>
>>>> I don't hide this, and it is part of the scientific (modest)
>>>> method. I
>>>> assume comp, and I derive consequences in that frame. Everyone is
>>>> free
>>>> to use this for or against some world view.
>>
>>> It's a good method for so many things, but not everything, and I'm
>>> only interested in solving everything.
>>
>> You might end up with a theory of everything that you will not be
>> able to communicate. You might have fans and disciples (and even
>> money) but not students and researchers correcting and extending your
>> work.
>>
>
> I can't do anything about that. If the world is not interested in the
> truth, then I can't change it.
Truth or hypothesis?
Do you want to do science (hypothesis), or not?
I take for granted elementary schoolboy arithmetic, and then I can
take the time to explain how a computer works.
> especially based on anti-physicalist conceptions of simulation as the
> only reality.
Quite the contrary. That is the result, derived from elementary
arithmetic, and the hypothesis that biology relies on deterministic
laws. I am not pretending it is obvious (other people have pretended
that for years, and it took me time to understand that it is not *so*
simple after all).
That is nicely said, but I don't buy it. You assume what my intuition
asks me to explain.
To say that arithmetic is too narrow is a symptom that you don't know
what you are talking about.
Analysis, physics, are deluding narrowings of arithmetic.
When I want to be provocative, I say that the physical universe is a
failed attempt made by God to understand the numbers.
>
>>
>>
>>
>>>>>> ?
>>>>>> (I let you know that one of my main motivation consists in
>>>>>> explaining
>>>>>> the physical, that is explaining it without using physical
>>>>>> notions
>>>>>> and
>>>>>> assumptions. The same for consciousness).
>>
>>>>> But what you are explaining it with is no more explainable than
>>>>> physical notions or assumptions. Why explain what is real in terms
>>>>> which are not real?
>>
>>>> You are just begging the question. You talk like if you knew what
>>>> is
>>>> real or not.
>>
>>> I know that consciousness is real,
>>
>> Good. My oldest opponents were disagreeing on this point (a critics
>> which does not make much sense).
>
> Heh, yeah, I can maybe see quibbling with the wording of the cogito,
> but the spirit of it seems silly to deny.
The cogito is important, and I agree with Slezak, that the Gödelian
sentences illustrate the (machine's) cogito:
Slezak, P. (1983). "Descartes's Diagonal Deduction". Brit. J. Phil.
Sci. 34, pp. 13-36.
>
>>
>>> and my consciousness through my
>>> body tells me that matter is real.
>>
>> Matter is real. I do agree with this. But matter, assuming comp, is
>> not something made of elementary material things. Matter, to be
>> short,
>> is the border of the universal mind, as seen by the universal mind.
>> It
>> is a real perception of something which is not primarily material,
>> but
>> sum up infinities of computations. An instructive image, is the
>> border
>> of the Mandelbrot set.
>
> I do understand what you mean, and I almost agree, again, except that
> the Mandelbrot set is too literal.
I have conjectured that the rational Mandelbrot set is a creative set
in the sense of Post, itself equivalent to the notion of universal
machine. In that case the Mandelbrot set is a compactification of a
universal dovetailer. It is a picture of "home".
In that case the analogy is literal.
> It doesn't look like a mind, it
> looks like a leaf or a feather.
Some parts look like a projection of a cut of the four dimensional
tree of life.
> Obsessive, repetitive self-
> similarity... definitely part of it, but you need the orientation of
> naive sensation and motive to make sense of it. It's the elephant in
> the Mandelbrot.
We are back on arithmetical realism. The structure of the M set is
independent of us. It occurs naturally in almost all iteration of
functions on complex numbers.
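The iteration in question is concrete enough to sketch. Below is the standard escape-time membership test for the Mandelbrot set; note that the "creative set" conjecture above is Bruno's own, and nothing in this code bears on it.

```python
# The Mandelbrot set: iterate z -> z^2 + c starting from z = 0.
# c belongs to the set iff the orbit stays bounded; |z| > 2 is the
# standard escape criterion.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))    # True: 0 -> 0 -> 0, stays bounded
print(in_mandelbrot(-1))   # True: orbit cycles -1, 0, -1, 0, ...
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... escapes
```

The structure really is independent of us in the sense that any such iteration of a quadratic map on the complex numbers produces it.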
>
>>
>>> My consciousness also tells me that
>>> some of its own contents do not matter and its perceptions do not
>>> faithfully render what is real outside of my awareness. I would say
>>> that arithmetic truths matter but they are not real, and therefore
>>> cannot be manifested in a vacuum - only through some material object
>>> which can accommodate the corresponding sensorimotive experiences.
>>> You
>>> can't write a program that runs on a computer made of only liquid or
>>> vapor - you need solid structures to accommodate fixed arithmetic
>>> truths. You need the right kinds of matter to express arithmetic
>>> truths, but matter does not need arithmetic to experience its own
>>> being.
>>
>> Not necessarily. You have to give an argument, and there are many
>> results which can explain to you why such an argument has to be very
>> sophisticated. Apparently, in arithmetic, numbers do dream
>> coherently (in a first-person sharable way) of a stable quantum
>> reality, with some symmetries at the bottom, and wave-like
>> interferences.
>>
>
> I think what you are saying is that matter can arise from arithmetic,
> which is possible, but I don't see the difference. Why is arithmetic
> easier to explain than matter?
Only the beginning of arithmetic is simpler to explain than matter.
To explain why photons have mass, you need the non-trivial arithmetical
fact that the sum of all natural numbers can be assigned the value -1/12.
Matter is hard to explain, even without addressing the hard problem of
matter.
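The "-1/12" here is not a literal sum but a regularized value, zeta(-1). One standard route goes through Abel summation of the alternating series via the Dirichlet eta function; the sketch below is a textbook computation, offered only to unpack the remark.

```python
# zeta(-1) = -1/12 via the Dirichlet eta function:
#   eta(s) = (1 - 2^(1-s)) * zeta(s),  where eta(-1) = 1 - 2 + 3 - 4 + ...
# Abel summation: sum_{n>=1} n * (-x)^(n-1) = 1/(1+x)^2  -->  1/4 as x -> 1,
# so eta(-1) = 1/4 and zeta(-1) = (1/4) / (1 - 2^2) = -1/12.

def abel_eta_minus1(x, terms=100_000):
    # partial Abel sum of 1 - 2x + 3x^2 - 4x^3 + ... at x just below 1
    return sum(n * (-x) ** (n - 1) for n in range(1, terms + 1))

eta = abel_eta_minus1(0.999)        # close to 1/4
zeta_minus1 = eta / (1 - 2 ** 2)    # eta(-1) / (1 - 2^(1-(-1)))
print(abs(zeta_minus1 - (-1 / 12)) < 1e-3)  # True
```

In physics (e.g. the Casimir effect and string theory mass spectra) this regularized value is what enters the calculation, not a divergent total.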
> I think that my hypothesis rooted in
> 'sense' (as the relation between matter-space-entropy and energy-time-
> significance)
I see words here, and no explanation.
> is an audaciously Promethean notion which grounds our
> perception in a cosmos which is both authentic and participatory, as
> well as transcendent and forgiving. From comp I get nothing surprising
> beyond the initial appreciation of the depth of possibilities of
> arithmetic, which although impressive, strike me as being merely awe
> inspiring with no hint at the gravity of the experience of organic
> life.
Why not? The beginning of arithmetic is simple, but when you grasp
that arithmetic is full of life, and that the arithmetical platonia is
infinitely messy, you might as well understand that comp might be
possible, and this without abandoning any of your intuitions, except
your quite frightening intuition that my son-in-law cannot appreciate
a good steak.
>
>>
>>
>>>> Now it is a fact that all scientists agree with simple facts like
>>>> 1+9=10, etc. Actually they are using such facts already in their
>>>> theories. I just show that IF we are machine, THEN those elementary
>>>> facts are enough to explain the less elementary one.
>>
>>> But since we aren't only a machine, then it's a dead end.
>>
>> You should say: "but since in my theory I am assuming that we are
>> not
>> machine, it is a dead end in my theory".
>
> Yes. Not trying to be rude, I just assume that everything I say is
> automatically within the disclaimer of 'in my view'.
Then you have to repeat it, and avoid the "truth" label.
>>
>>> It's
>>> circular reasoning because you can say we can't prove we're not
>>> machines,
>>
>> I say the exact opposite. We can prove that we are not machine (in
>> case we are not machine). If we are (consistent) machine, then we
>> cannot prove it.
>
> So how do we prove that we are not machine?
For example, by showing that comp entails that the mass of an electron
is, after all renormalizations are completed, bigger than one ton.
> Why can't we be both
> machine and not machine?
Comp makes sense of the idea that the 3-I is a machine and the 1-I is
not. But if you mean literally that we are both machine and non-machine,
then that is just a contradictory statement.
>
>>
>>> but the whole idea of 'proving' is mechanical so you are
>>> just magnifying the implicit prejudice and getting further from the
>>> non-mechanistic truths of awareness.
>>
>> The human activity of proving is not mechanical(*), but a gentle
>> polite proof should be mechanically checkable. You can't say to the
>> peer reviewers that for the proposition 13 they have to pray God or
>> smoke salvia divinorum. (Or you say it only at the coffee break, and
>> this is for private concerns, not for the publication, unless it is a
>> paper on salvia or God, but then the goal is no longer to prove but to
>> suggest a possible empirical discovery).
>>
>> (*) assuming P ≠ NP.
>
> If peer reviewers demand that a theory which explains subjectivity not
> examine subjectivity directly, then they have a priori excluded any
> possibility of understanding subjectivity. The peer reviewers are the
> problem, not the theory.
Sure. But they do not demand that subjectivity not be examined
directly; they demand that *arguments* not be based on non-
communicable statements, beyond the axioms.
>
>
>>
>>
>>
>>>>>>>>> The link between the
>>>>>>>>> sensorimotive and electromagnetic is the invariance between
>>>>>>>>> the
>>>>>>>>> two.
>>
>>>>>>>> ?
>>>>>>> Feelings and action potentials have some phenomenological
>>>>>>> overlap.
>>
>>>>>> What is feeling, what is action, what is potential?
>>
>>>>> To ask what feeling is can only be sophistry.
>>
>>>> Not when addressing issues in fundamental cognitive science.
>>>> Neither
>>>> matter nor consciousness should be taken as simple elementary
>>>> notions.
>>
>>> But numbers should be taken as elementary notions?
>>
>> In the usual mathematical sense. No need for extra metaphysical
>> assumptions. You just need to believe sentences like "prime numbers
>> exist".
>
> They exist in the context of a particular sensorimotive logos, not in
> any independent sense. Something like the visible spectrum is a much
> stronger primitive as it appears to us unbidden and unexplained as a
> shared experience without having to be learned or understood.
But if this is an argument, then you take what we seek to explain as
the direct assumption. You could as well say that we should cultivate
our gardens instead of doing fundamental research.
>
>> All the material science use this. Despite the claims of some
>> philosophers, we just cannot do science without assuming the
>> independence of the truth of elementary (first order) arithmetical
>> relations.
>
> They can have truth or refer to truth without themselves being
> phenomena which exist independently.
Not if we assume comp. That is the whole point of the UDA.
> They aren't a they even, it's
> just an ephemeral collection of human ideas about quantitative
> universality. I don't see that they describe quality or techne at all.
They provide the best we can hope for, in case we do survive with a
digital brain.
>
>>
>>> That's the problem,
>>> you are trying to explain awareness as an epiphenomenon
>>
>> Awareness is not an epiphenomenon at all. It is a real, non-illusional
>> epistemological phenomenon which is responsible (in some logico-
>> arithmetical sense) for the rise of physical reality.
>
> If it's not an epiphenomenon, then are you saying it is not a
> consequence of arithmetic?
Why? I believe free will makes sense in arithmetic. Being
epistemological does not mean being epiphenomenal.
The habit of putting in the trash the higher levels is an aristotelian
habit, founded on an aristotelian dogmatic misconception of mind and
matter. I think.
>
>>
>> It is: NUMBERS ==> CONSCIOUSNESS/DREAMS ==> SHARABLE DREAMS (physical
>> realities).
>
> Isn't that saying consciousness is an epiphenomenon of numbers?
I would say it is a fundamental phenomenological reality.
> What
> are numbers without consciousness?
They are like numbers without prime numbers. Just nonsense.
>
>>
>>> of cognitive
>>> science, when of course cognition arises from feeling (otherwise
>>> babies would come out of the womb solving math equations instead of
>>> crying, and civilizations should evolve binary codes before
>>> ideographic alphabets and cave paintings).
>>
>> I agree that cognition arises from feelings.
>>
>
> Cool
OK. It is a key point. Feelings precede (even in logic and
arithmetic, but also in physical time) thought, thought precedes
language, and language precedes computers, etc. But all that
precedes matter, in the logico-arithmetical reality.
>
>>
>>
>>>>> It is a primitive of
>>>>> human subjectivity, and possibly universal subjectivity. To
>>>>> experience
>>>>> directly, qualitatively, significantly. An action potential is an
>>>>> electromagnetic spike train among neurons. They can be
>>>>> correlated to
>>>>> instantiation of feelings.
>>
>>>> I agree with all this, but that has to be explained, not as taken
>>>> for
>>>> granted.
>>
>>> How can any primitive be explained?
>>
>> It can't, by definition. That is why I don't take matter and
>> consciousness as primitive, given that we can explain them from
>> numbers (and their laws). The contrary is false. We cannot explain
>> numbers by matter or consciousness.
>
> I think that we can explain numbers from consciousness. They are
> sensorimotive teleological gestures refined and polished into an
> instrumental literalism which closely approximates a particular band
> of literal sense that we share with many physical, chemical, and
> primitive biological phenomena. They do not extend beyond a
> superficial treatment of experiences like pain, pleasure, sensation,
> humor, poetry, music, etc.
You are confusing the numbers studied by arithmeticians with the human
numbers, which can be studied by psychologists, sociologists, etc.
>
>> It can be proved that numbers
>> cannot be explained at all. In that sense, they are provably
>> necessarily primitive.
>
> No more so than colors or words, thoughts, feelings, being, etc.
Why? Comp does provide an explanation of many attributes of feelings,
except for a (tiny) part, but then comp meta-explains completely why
we cannot explain them completely.
Why? Because nothing third-person describable can be "the same thing"
as a lived experience.
> they have the
> same rhythmic patterns, instantiation, and duration. The content,
> however is precisely the opposite: The MRI patterns are topological
> regions of activity in a 3D space, without any particular meaning or
> significance, but with great specificity in terms of precise location
> and public verifiability. The subjective experience is literally the
> opposite. Not topological in space but perceptual in time.
OK. (Actually honesty forces me to say that although I was pretty sure
that subjectivity always involves time, I am less sure since I have
done some experiments with Salvia divinorum. The plant has raised some
doubts about this consciousness/time relation.)
> If you
> shorten the interval too much, you lose the sense of the perception
> entirely, but the electromagnetic pattern does not vanish. The
> subjective experience has significance and meaning. Without the
> experience side of it, the neural correlate would be no more
> interesting than examining sand dunes. Without taking significance
> into account, there is no purpose to examine the MRI in the first
> place.
OK.
>
>>
>>
>>> Electromagnetism is the name we give to the various phenomena of
>>> matter across space - waving, attracting, repulsing, moving,
>>> intensifying, discharging, radiating, accumulating density,
>>> surfaces,
>>> depth, consistency, etc. Sensorimotivation is the name I'm giving to
>>> the various phenomena of experience (energy) through time -
>>> detecting,
>>> sensing, feeling, being, doing, intention, image, emotion, thought,
>>> meaning, symbol, archetype, metaphor, semiotics, communication,
>>> arithmetic, etc.
>>
>> That's what the numbers can explain, and that is what cannot explain
>> the numbers (without assuming them implicitly).
>
> I think that numbers can't explain any of that without the a priori
> expectation of those experiences. Numbers by themselves do not suggest
> anything but more numbers.
Not at all. You have to learn to listen to them. They look a bit
alien, and it can take time to infer the life within, but this is our
*human* current lack of imagination, to talk like John Mikes.
> They have no capacity to recognize their
> own patterns,
Of course they can. Well, of course, if you study a bit of computer
science. It is really due to the fact that numbers can recognize
their own patterns, and change them accordingly, that they brought a
non-computable amount of mess into Platonia.
> only to be recognized by the computational shadows cast
> by concretely embodied agents of sense and motive.
I don't really believe in concretely embodied agents (even without
comp). I have never seen one. It is an Aristotelian myth.
I agree that it might look like such things exist, but that is an
extrapolation brought by billions of years of life struggle, in a very
deep ocean of computations, seen from inside.
Yes. Consciousness is part of arithmetical truth, and it is non
communicable as such by any arithmetical entity.
> or the arithmetic
> itself, the content, while electromagnetism contains only the
> computational consequences of the arithmetic. Yeah, if that's what you
> are saying I like it. It gives me something new.
Cool.
> I don't think it
> captures the significance of what the presence of p does as far as
> making sensorimotive analog through time and electromagnetic being
> discrete across space.
Er... this points to an open problem. Comp does not yet decide between
a continuous space-time and a discrete one. But comp predicts that
some physical things have to be continuous, though it might still be
only the probabilities. I dunno.
>
>>
>>>> Hmm... The difference between subjective and sensorimotive would be
>>>> captured by the difference between Bp & p, and Bp & Dt & p. That
>>>> confirms my feeling described above.
>>
>>> I'll get back to you if you can explain the variables better. I
>>> tried
>>> Googling them but nothing clear comes up for me.
>>
>> I hope that what I wrote above helps a bit. There are good books on
>> the subject, but you need to follow some courses in mathematical
>> logic to get familiar with it.
>
> I think that there is a cost associated with relying exclusively on
> mathematical logic in a TOE though. My hypothesis shows how modal
> agreements magnify the in-language and attenuate the outward
> sensitivity. Like a gaggle of teenagers hanging around in a pack,
> talking to each other incessantly and oblivious to the world.
Hmm...
This does not make much sense to me ...
> so that the exact circumstance of someone's
> birth - the thoughts and feelings of the doctor and nurse, the sound
> of the cars outside, the proximity to the vineyards and the
> ocean...all of that may need to be reproduced to instantiate a
> particular identity.
... but I agree with this, although the subjective memory of all this
might be quite enough, most probably. But even if the ocean is needed,
it would only make the comp subst level lower.
>
>>
>>
>>
>>>> We would also be led to the peculiar situation
>>>> where machine could correctly prove that they are not machine,
>>
>>> I don't see how matter as a primitive makes machines able to prove
>>> that they are not machines.
>>
>> I was unclear. What I say is that if a machine convinces herself,
>> with your help perhaps, that some primitive matter exists and has a
>> role in the instantiation of her consciousness, then such a machine
>> will eventually conclude (in a way similar to UDA) that she is not a
>> machine. If such a machine is ideally correct, she would conclude
>> correctly that she is not a machine. This comes from the fact that
>> the UDA reasoning can be done by machines (as AUDA illustrated in
>> some admittedly abstract way). You might intuit this if you take the
>> time to follow the UD argument.
>>
>
> Hmm. Not sure I get it. I sort of get that the mathematical
> proposition of a matter-like topology would give rise to some novelty
> through computational non-accessibility but I don't know that the
> novelty would necessarily seem non-mechanical.
If it is mechanical, it is Turing emulable.
If it is Turing emulable, it is "already" emulated infinitely often in
arithmetical truth (even in a tiny part of arithmetical truth).
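Bruno's "emulated infinitely often" relies on dovetailing: interleave the execution of every program so that each one eventually gets unboundedly many steps, even though no single program is ever run to completion first. A toy sketch in Python (the step functions are hypothetical stand-ins for enumerated programs, not an actual universal dovetailer):

```python
def dovetail(programs, rounds):
    """Interleave execution of a family of programs.

    Round n runs one step of each of the first n programs, so every
    program is eventually stepped arbitrarily often, even though no
    single program is ever run to completion first.
    """
    trace = []
    states = {}  # program index -> current state
    for n in range(1, rounds + 1):
        for i in range(n):
            prev = states.get(i, 0)
            states[i] = programs[i](prev)   # one step of program i
            trace.append((n, i, states[i]))
    return trace

# Toy "programs": program k just counts in steps of k+1.
# (k=k pins the loop variable into each lambda's own default.)
progs = [lambda s, k=k: s + k + 1 for k in range(10)]
t = dovetail(progs, 4)
# After 4 rounds, program 0 has been stepped 4 times, program 3 once.
```

The point of the schedule is only fairness: extend `rounds` and any fixed program receives as many steps as you like.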
>
>>> I think a machine (or something we presume is a machine) proves
>>> whether or not it is a machine by how it responds to errors or
>>> hardware failures.
>>
>> A machine can never prove, still less know, that she is a machine.
>> Even machines have to make a leap of faith to admit mechanism. Most
>> machines will be 'naturally' against comp, before introspecting
>> deeper, and reasoning deeper, so that they can infer the possibility
>> (but nothing more).
>
> I'm not against the reasoning of that, I just don't think it's a
> compelling basis for rich perception. Sure, everyone's reality tunnel
> looks like reality and not a tunnel, but that doesn't explain why the
> contents of the tunnel are so interesting and so real.
Because arithmetical truth has this basic property: the more you know
about it, the less you know about it. Imagine a very dark infinite
place (machine ignorance): at first you see almost nothing, so you can
still believe the place is not so big. But the more light you shed on
it, the more you grasp how big the place is.
>
>>
>>> You could maybe say that what
>>> we are made of is an accumulation of the universe's favorite errors,
>>> failures, and aberrations.
>>
>> Partially, yes. Even partial lies. Perhaps. I'm not sure.
>
> Sure, yes. Partial lies are probably the only way to be certain of
> keeping truth alive. Indra's Net of Bullshit, haha. Seriously though,
> you need the alchemical base alloys to hide the precious metal within,
> otherwise it wouldn't be precious.
Cool image.
>
>>
>>
>>
>>>> making
>>>> all possible discourses of machine being of the type Bf. You might
>>>> eventually change my mind on the non provability of comp (as
>>>> opposed
>>>> to the non recognizability of our level of comp). For this you
>>>> should convince the machine that material is necessarily
>>>> primitive. I
>>>> begin to doubt that non-comp can make any sense. Hmm...
>>
>>> If I pull the plug on the machine, then the machine halts. Why
>>> should
>>> that be the case were machine independent of material substrate?
>>
>> Because machines can have long and complex computational histories.
>> If you pull the plug on the machine, you act on her 3-body, whose
>> existence she shares with you, and so in the normal histories she
>> will dysfunction with a probability very near 1. From the point of
>> view of the machine, she will survive in the computations which are
>> closest to those normal computations (that explains the
>> comp-immortality, which can already be explained in the inferred QM
>> of nature).
>
> So a computer keeps computing even when you turn it off? That would be
> hard to swallow if you are saying that.
If your computer runs a complex computation, making it possible for a
consciousness to manifest itself relatively to you, and if you pull
the plug on your computer, then relatively to you, that consciousness
will no longer be able to manifest itself. But from the view of that
consciousness itself, it will continue to be manifested on
sufficiently similar computations, run by sufficiently similar
computers, somewhere in UD*.
If you want to stay in relation with that consciousness, you will have
to pull the plug on yourself simultaneously.
But this is true *only* in principle. To do this in practice, you have
to ensure that you unplug yourself at the right level. If not, you
might just end up in a universe where there is no plug, or something.
We cannot know our level, so "unplugging oneself" cannot really be
defined at all.
But yes, comp implies immortality. Actually it implies many forms of
immortality.
It is the big difference between Aristotle and Plato. With the first
we are mortal souls, imprisoned in a primary material universe. With
Plato/Plotinus, we are immortal souls imprisoned in the consciousness
prison (Rossler's image).
On the point of immortality, note that the Christians depart from the
atheists in keeping up Plato's immortality. Comp is a little more
Christian than atheistic. Of course, comp departs from both Christians
and atheists on the Aristotelian *primary* matter idea.
Bruno
On Oct 5, 10:15 am, Quentin Anciaux <allco...@gmail.com> wrote:
> No they are not saying that. They are saying that a model of the brain fed
> with the same inputs as a real brain will act as the real brain... if it was
> not the case, the model would be wrong so you could not label it as a model
> of the brain.
That would require that the model of the brain be closer than
genetically identical, since identical twins and conjoined twins do
not always respond the same way to the same inputs.
That may not be
possible, since the epigenetic variation and developmental influences
may not be knowable or reproducible. It's a 'Boys From Brazil' theory.
Cool sci-fi, but I don't think we will ever have to worry about
considering it as a real possibility. We know nothing about what the
substitution level of the 'same inputs' would be either. Can you say
that making a brain of a 10 year old would not require 10 years of
sequential neural imprinting, or that the imprinting would be any less
complex to develop than the world itself?
>
> They never said they could know which inputs you could have and they don't
> have to. They just have to know the transition rule (biochemichal/physical)
> of each neurons and as the brain respect physics so as the model, and so it
> will react the same way.
Reacting is not experiencing though. A picture of a brain can react
like a brain, but it doesn't mean there is an experiential correlate
there. Just because the picture is 3D and has some computation behind
it instead of just a recording, why would that make it suddenly have
an experience?
>
> You do the same mistake with your tv pixel analogy. If I know all the
> transition rule of *a pixel* according to input... I can build a model of a
> TV that will *exactly* display the same thing as the real TV for the same
> inputs without knowing anything about movies/show/whatever... I don't care
> about movies at that level. They never said that they would explain/predict
> the input to the tv, just replicate the tv.
You have to care about the movies at that level because that's what
consciousness is in the metaphor. If you don't have an experience of
watching a movie, then you just have an a-signifying non-pattern of
unrelated pixels. You need a perceiver, an audience, to turn the image
into something that makes sense. It's like saying that you could write
a piece of software that could be used as a replacement for a monitor.
It doesn't matter if you have a video card in the computer and drivers
to run it, without the actual hardware screen plugged into it there is
no way for us to see it. A computer does not come with its own screen
built into the interior of its microprocessors - but we do have the
equivalent of that. Our experience cannot be seen from our neurology,
you have to already know it's there. Building a model based only on
neurology doesn't mean that experience comes with it any more than a
video driver means you don't need a monitor.
Craig
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.
>> In fact, Craig himself
>> denies that his theory would manifest as violation of physical law,
>> and is therefore inconsistent.
>
> There is no inconsistency. You're just not understanding what I'm
> saying because you are only willing to think in terms of reactive
> strategies for neutralizing the threat to your common sense (which is
> a cumulative entanglement of autobiographical experiences and
> understandings, interpretations of cultural traditions and
> perspectives, etc).
If you are right then there would be a violation of physical law in
the brain. You have said as much, then denied it. You have said that
neurons firing in the brain can't be just due to a chain of
biochemical events. That would mean that, somewhere, a neuron fires
where examination of its physical state would suggest that it should
not fire. You can't have it both ways: EITHER the neurons all fire due
to detectable physical causes OR some neurons fire without
detectable physical causes.
--
Stathis Papaioannou
Firstly, it is theoretically possible to model the brain arbitrarily
closely, even if technically difficult. Secondly, it is enough for the
purposes of the discussion to model a generic brain, not a particular
brain.
>> They never said they could know which inputs you could have and they don't
>> have to. They just have to know the transition rule (biochemichal/physical)
>> of each neurons and as the brain respect physics so as the model, and so it
>> will react the same way.
>
> Reacting is not experiencing though. A picture of a brain can react
> like a brain, but it doesn't mean there is an experiential correlate
> there. Just because the picture is 3D and has some computation behind
> it instead of just a recording, why would that make it suddenly have
> an experience?
In the first instance, yes, you might not be sure if the artificial
brain is a zombie. But the fading qualia thought experiment shows
that if it is a zombie it would allow you to make absurd creatures,
partial zombies (defined as someone who lacks a particular conscious
modality but behaves normally and doesn't realise anything is wrong).
The only way to avoid the partial zombies is if the brain model
replicates consciousness along with function.
>> You do the same mistake with your tv pixel analogy. If I know all the
>> transition rule of *a pixel* according to input... I can build a model of a
>> TV that will *exactly* display the same thing as the real TV for the same
>> inputs without knowing anything about movies/show/whatever... I don't care
>> about movies at that level. They never said that they would explain/predict
>> the input to the tv, just replicate the tv.
>
> You have to care about the movies at that level because that's what
> consciousness is in the metaphor. If you don't have an experience of
> watching a movie, then you just have an a-signifying non-pattern of
> unrelated pixels. You need a perceiver, and audience to turn the image
> into something that makes sense. It's like saying that you could write
> a piece of software that could be used as a replacement for a monitor.
> It doesn't matter if you have a video card in the computer and drivers
> to run it, without the actual hardware screen plugged into it there is
> no way for us to see it. A computer does not come with it's own screen
> built into the interior of it's microprocessors - but we do have the
> equivalent of that. Our experience cannot be seen from our neurology,
> you have to already know it's there. Building a model based only on
> neurology doesn't mean that experience comes with it any more than a
> video driver means you don't need a monitor.
A model of the TV will reproduce the externally observable behaviour
of a TV, given the same inputs. That's what a model is. A model of a
brain would reproduce the externally observable behaviour of a brain.
Whether it would also reproduce the consciousness is a further
question, and the fading qualia thought experiment shows that it
would.
--
Stathis Papaioannou
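Stathis's definition of a model, and Quentin's pixel analogy, can be made concrete with a toy sketch (the function names and the `(s + x) % 256` transition rule are made up for illustration): a model that captures only the observed transition rule reproduces the "real" system's externally observable output for any input stream, while knowing nothing about what the movie means to a viewer.

```python
def real_tv(inputs):
    """The 'real' system: pixel brightness follows the input signal."""
    state = 0
    frames = []
    for x in inputs:
        state = (state + x) % 256   # the pixel's transition rule
        frames.append(state)
    return frames

def model_tv(inputs, transition):
    """A model built only from the observed transition rule.

    It knows nothing about 'movies'; it just replays the rule."""
    state = 0
    frames = []
    for x in inputs:
        state = transition(state, x)
        frames.append(state)
    return frames

signal = [10, 250, 7, 99]
assert model_tv(signal, lambda s, x: (s + x) % 256) == real_tv(signal)
```

The sketch shows only what both sides already agree on: identical external behaviour. Whether the model also reproduces the experience is exactly the question left open in the thread.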
>> If you are right then there would be a violation of physical law in
>> the brain. You have said as much, then denied it. You have said that
>> neurons firing in the brain can't be just due to a chain of
>> biochemical events.
>
> They can be due to a chain of biochemical events, but they also *are*
> biochemical events, and therefore can influence them intentionally as
> well as be influenced by them. I don't understand why this is such a
> controversial idea. Just think of the way that you actually function
> right now. Your personal motives driving what *you* do with *your*
> mind and *your* body. If the mind could be understood just as
> biochemical events among neurons, then we would have no way to think
> of our bodies as ours - the brain would not need to think of itself in
> any other terms other than the biochemical events that it literally
> is. Why make up some bogus GUI if there is no user?
The mind may not be understandable in terms of biochemical events but
the observable behaviour of the brain can be.
>>That would mean that, somewhere, a neuron fires
>> where examination of its physical state would suggest that it should
>> not fire.
>
> I guess you are never going to get tired of me correcting this
> factually incorrect assumption.
>
> The physical state of a neuron only suggests whether it is firing or
> not firing at the moment - not the circumstances under which it might
> fire. If you examine neurons in someone's amygdala, how is that going
> to tell you whether or not they are going to play poker next week or
> not? If the neurons feel like firing, does a casino appear?
Whether a neuron in the amygdala or anywhere else fires depends on its
present state, inputs from the neurons with which it interfaces and
other aspects of its environment including things such as temperature,
pH and ion concentrations. If the person thinks about gambling, that
changes the inputs to the neuron and causes it to fire. It can't fire
without any physical change. It can't fire without any physical
change. It can't fire without any physical change.
--
Stathis Papaioannou
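The deterministic picture Stathis describes, where firing depends only on the neuron's present state and its physical inputs, is what standard neuron models formalize. A minimal leaky integrate-and-fire sketch (illustrative constants, not a biophysical model):

```python
def lif_step(v, input_current, leak=0.9, threshold=1.0):
    """One step of a leaky integrate-and-fire neuron.

    The membrane potential v decays toward rest and integrates its
    input; the neuron fires exactly when v crosses threshold. There
    is no 'fires anyway' branch independent of the physical state.
    """
    v = leak * v + input_current
    if v >= threshold:
        return 0.0, True    # reset after a spike
    return v, False

def run(inputs):
    """The spike train is fully determined by state and inputs."""
    v, spikes = 0.0, []
    for i in inputs:
        v, fired = lif_step(v, i)
        spikes.append(fired)
    return spikes

# Identical inputs always produce an identical spike train.
assert run([0.3, 0.3, 0.3, 0.6]) == run([0.3, 0.3, 0.3, 0.6])
```

In this toy model "the person thinks about gambling" could only show up as a change in `inputs`; that is the sense in which the neuron "can't fire without any physical change".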
On Oct 5, 10:39 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Thu, Oct 6, 2011 at 12:39 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> If you are right then there would be a violation of physical law in
> >> the brain. You have said as much, then denied it. You have said that
> >> neurons firing in the brain can't be just due to a chain of
> >> biochemical events.
>
> > They can be due to a chain of biochemical events, but they also *are*
> > biochemical events, and therefore can influence them intentionally as
> > well as be influenced by them. I don't understand why this is such a
> > controversial idea. Just think of the way that you actually function
> > right now. Your personal motives driving what *you* do with *your*
> > mind and *your* body. If the mind could be understood just as
> > biochemical events among neurons, then we would have no way to think
> > of our bodies as ours - the brain would not need to think of itself in
> > any other terms other than the biochemical events that it literally
> > is. Why make up some bogus GUI if there is no user?
>
> The mind may not be understandable in terms of biochemical events but
> the observable behaviour of the brain can be.
Yes, the 3-p physical behaviors that can be observed with our
contemporary instruments can be understood in terms of biochemical
events, but that doesn't mean that they can be modeled accurately or
that those models would be able to produce 1-p experience by
themselves. We can understand the behaviors of an amoeba in terms of
biochemical events but that doesn't mean we can tell which direction
it's going to move in.
>
> >>That would mean that, somewhere, a neuron fires
> >> where examination of its physical state would suggest that it should
> >> not fire.
>
> > I guess you are never going to get tired of me correcting this
> > factually incorrect assumption.
>
> > The physical state of a neuron only suggests whether it is firing or
> > not firing at the moment - not the circumstances under which it might
> > fire. If you examine neurons in someone's amygdala, how is that going
> > to tell you whether or not they are going to play poker next week or
> > not? If the neurons feel like firing, does a casino appear?
>
> Whether a neuron in the amygdala or anywhere else fires depends on its
> present state, inputs from the neurons with which it interfaces and
> other aspects of its environment including things such as temperature,
> pH and ion concentrations. If the person thinks about gambling, that
> changes the inputs to the neuron and causes it to fire. It can't fire
> without any physical change. It can't fire without any physical
> change. It can't fire without any physical change.
"If the person thinks about gambling, that changes the inputs..."
Start there. If a person thinks... means that they are initiating the
physical change with their thought.
Their thought is the
electromagnetic change which drives the physical change. The thought
or intention is the signifying sensorimotive view; the electromagnetic
view is a-signifying voltage, charge, detection of ligands, etc. It is
bidirectional so that the reason for firing can be driven by the
biochemistry, or by the content of a person's mind. This is just
common sense, it's not disputable without sophistry.
Here's how I think it might work: You can be excited because you
decide to think about something that excites you, or you can ingest a
stimulant drug and you will become excited in general and that
excitement will lead your mind by the nose to the subjects that most
excite it. They are the same thing but going in opposite directions.
Think of it as induction:
Imagine that this works like an electric rectifier
(http://electrapk.com/wp-content/uploads/2011/08/half-wave-rectifier-with-transformer.jpg)
except that instead of electric current generating a magnetic field
through a coil which pushes or pulls the magnetic force within the
other coil - the brain's electromagnetic field is pushing to and/or
pulling from changes in the sensorimotive experience. The
difference though is that with a rectifier, it is the identical
physical ontology which is mirrored in parallel (electromagnetic :||:
magnetic-electric) whereas in sensorimotive *the ontology is
perpendicular* (meaning that what it actually is can only be
*experiences linked together through time*, not *objects separated
across space*), so there are four mirrorings:
electromagnetic :||: sensorimotive (3SI) - brain changes induce
feelings
sensorimotive :||: electromagnetic (1SI) - feelings induce brain
changes
magnetic-electric :||: motive-sensory (3MI) - mechanical actions
induce involuntary reactions
and motive-sensory :||: magnetic-electric (1MI) - voluntary actions
induce mechanical actions
>> The mind may not be understandable in terms of biochemical events but
>> the observable behaviour of the brain can be.
>
> Yes, the 3-p physical behaviors that can be observed with our
> contemporary instruments can be understood in terms of biochemical
> events, but that doesn't mean that they can be modeled accurately or
> that those models would be able to produce 1-p experience by
> themselves. We can understand the behaviors of an amoeba in terms of
> biochemical events but that doesn't mean we can tell which direction
> it's going to move in.
It's also difficult to tell exactly which way a leaf in the wind will
move. The leaf may have qualia: it is something-it-is-like to be a
leaf, and the qualia may differ depending on whether the leaf goes
left or right. As with a brain, the leaf does not break any physical
laws and its behaviour can be completely described in terms of
physical processes, but such a description would leave out an
important part of the picture, the subjectivity. While it may be
correct to say that the leaf moves to the right because it wants to
move to the right, since moving to the right is associated with
right-moving willfulness, this does not mean that the qualia have a
causal effect on its behaviour. A causal effect of the qualia on the
leaf's behaviour would mean that the leaf moves contrary to physical
laws, confounding scientists by moving to the right when the forces on
it suggest it should move to the left. It's similar with the brain: a
direct causal effect of qualia on behaviour would mean that neurons
fire when their physical state would suggest that they not fire. I'm
sorry that you don't like this, but it is what it would mean if the
relationship between qualia and physical activity were bidirectional
rather than the qualia being supervenient.
--
Stathis Papaioannou
>I'm
> sorry that you don't like this,
It's not that I don't like it, it's just that I see that you are wrong
about it yet you want me to treat it as a plausible thesis. The
consequence of your view is that we can't tell the difference between
a living protozoa and a hairy bubble. It's sophistry. You see a salmon
swim upstream, does that not mean they 'move contrary to physical
laws'? How does the salmon do that? Is it magic? Salmon cannot exist.
Such a thing would confound scientists!
Life is ordinary on this planet. It uses the laws of physics for its
own purposes, which may or may not relate to physical existence. I'm
sorry that you don't like that, but in a contest between theory and
reality, reality always wins. It doesn't matter if you don't
understand it, you have my condolences, but I do understand it and I'm
telling you that it is for that reason that I am certain your view is
factually less complete than mine. My view includes your view, but
your view ignores mine.
> but it is what it would mean if the
> relationship between qualia and physical activity were bidirectional
> rather than the qualia being supervenient.
If qualia were not bidirectional, you could not read or write.
Craig
On 08/10/2011, at 12:02 AM, Craig Weinberg <whats...@gmail.com> wrote:
>
>> it is something-it-is-like to be a
>> leaf, and the qualia may differ depending on whether the leaf goes
>> left or right. As with a brain, the leaf does not break any physical
>> laws and its behaviour can be completely described in terms of
>> physical processes, but such a description would leave out an
>> important part of the picture, the subjectivity. While it may be
>> correct to say that the leaf moves to the right because it wants to
>> move to the right, since moving to the right is associated with
>> right-moving willfulness, this does not mean that the qualia have a
>> causal effect on its behaviour.
>
> No, because if the wind is also pushing other inanimate objects in the
> same direction and the leaf never resists that, then we can assume
> that it has no ability to choose its direction.
The leaf has the ability to choose its direction to the same extent that a motile cell such as an amoeba does. The amoeba follows chemotactic gradients, the leaf follows the wind. The amoeba does not move in a direction contrary to physics and neither does the leaf. The amoeba may feel that it is choosing where to go and so might the leaf.
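[Editorial aside: the amoeba/leaf comparison above can be made concrete with a toy gradient-follower. This is only a sketch under invented numbers (a 1-D concentration field, greedy hill-climbing), not a model anyone in the thread proposed; it illustrates how "following a chemotactic gradient" is a fully lawful, deterministic procedure.]

```python
# Toy chemotaxis: an agent that always steps toward the higher
# neighbouring attractant concentration. Field values are invented.

def chemotax(field, pos):
    """Greedy 1-D gradient following; stops at a local maximum.

    field -- list of attractant concentrations, one per position
    pos   -- starting position (index into field)
    Returns the sequence of positions visited.
    """
    path = [pos]
    while True:
        # Collect the concentrations at the neighbouring positions.
        neighbours = {}
        if pos > 0:
            neighbours[pos - 1] = field[pos - 1]
        if pos < len(field) - 1:
            neighbours[pos + 1] = field[pos + 1]
        if not neighbours:
            break
        best = max(neighbours, key=neighbours.get)
        # Stop when no neighbour improves on the current position.
        if field[best] <= field[pos]:
            break
        pos = best
        path.append(pos)
    return path

print(chemotax([0.1, 0.3, 0.7, 0.9, 0.4], 0))  # → [0, 1, 2, 3]
```

Given the same field and starting point, the path is always the same: "choice" here is exhausted by state plus environment.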
>
>> A causal effect of the qualia on the
>> leaf's behaviour would mean that the leaf moves contrary to physical
>> laws, confounding scientists by moving to the right when the forces on
>> it suggest it should move to the left. It's similar with the brain: a
>> direct causal effect of qualia on behaviour would mean that neurons
>> fire when their physical state would suggest that they not fire.
>
> You aren't hearing me, so I am going to start counting how many times
> I answer your false assertion - even though it's probably been at
> least 5 or 6 times, I'll start the countdown at ten, and at 0, I'm not
> going to answer this question again from you.
>
> 10: There is no such thing as a physical state which suggests whether
> a neuron that can fire (i.e., has repolarized, replenished, or otherwise
> recovered from its last firing) actually will fire. You can induce it
> to fire manually, but left to its own devices, you can't say that a
> neuron which triggers a voluntary movement is going to fire without
> knowing when the person whose arm it is decides to move it. You can
> look at every nerve in my body right now and not know whether I will
> be standing or sitting in one hour's time. There is no physical law
> whatsoever that has an opinion one way or the other.
If a motor neuron involved in voluntary activity fires where you would not predict it would fire given its internal state and the inputs then it is *by definition* acting contrary to physical law.
>> If a motor neuron involved in voluntary activity fires where you would not predict it would fire given its internal state and the inputs then it is *by definition* acting contrary to physical law.
>
> Every motor neuron involved in voluntary activity fires
> where you would not predict, given that the internal state provides no
> prediction and that the inputs are determined by the subject and
> therefore unknowable to anyone outside of the subject.
The internal state of the neuron determines its sensitivity to inputs.
The internal state is complex but it includes things such as the
membrane potential, the intracellular ion concentrations, the number,
type and location of ion channels, to what extent the synaptic
vesicles have filled with neurotransmitter, and multiple other
factors. The inputs consist of every environmental factor that might
potentially affect the neuron such as the extracellular ionic
concentrations, pH, temperature, synaptic connections, concentration
of neurotransmitter in the synapse, concentration of enzymes which
break down neurotransmitter and so on. If the neuron fires where
consideration of these factors would lead to a prediction that it
should not fire then that is by definition the neuron acting contrary
to physical law. How else would you define it?
--
Stathis Papaioannou
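[Editorial aside: the determinants listed above (membrane potential, ion concentrations, channel states, synaptic inputs) are what computational neuroscience compresses into simple spiking models. The leaky integrate-and-fire toy below is a hedged illustration only; the function name, parameters, and inputs are invented, but it shows what "firing determined by internal state and inputs" means: identical state and inputs always yield identical spike times.]

```python
# Minimal leaky integrate-and-fire neuron. Parameter values are
# illustrative, not physiological.

def simulate_lif(inputs, threshold=1.0, leak=0.9, v0=0.0):
    """Return the time steps at which the model neuron fires.

    inputs    -- sequence of input currents, one per time step
    threshold -- membrane potential at which the neuron spikes
    leak      -- fraction of potential retained each step
    v0        -- initial membrane potential (internal state)
    """
    v = v0
    spikes = []
    for t, i_in in enumerate(inputs):
        v = leak * v + i_in      # state evolves lawfully with input
        if v >= threshold:       # firing follows from state alone
            spikes.append(t)
            v = 0.0              # reset after the spike
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.6, 0.6]))  # → [2, 5]
```

In this caricature there is no extra degree of freedom for the neuron to "decide" with; whether a real neuron's full state plus inputs likewise fixes its firing is exactly what the two correspondents dispute.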
Of course all the parts of the car determine how it will move! You can
predict exactly what the car will do if you know how it works and you
have the inputs. A model of the car, such as a car racing computer
game, does not include the driver and the whole universe, as you seem
to think, just the car.
>> If the neuron fires where
>> consideration of these factors would lead to a prediction that it
>> should not fire then that is by definition the neuron acting contrary
>> to physical law.
>
> There is no such thing as a factor which leads to a prediction of when
> efferent nerves will fire. Even if you say that the subject is just
> regions of the brain, it is still those regions, those tissues and
> neurons which *decide* to fire as a first cause - without any
> deterministic precursor that could ever be predicted with any degree
> of accuracy without access to the private subjective content of the
> decision process. Seeing a nerve fire doesn't tell you when it's going
> to fire again, just as seeing a car make a left turn doesn't tell you
> what direction it's going to turn after that.
So a neuron fires in those regions of the brain associated with
subjectivity where the biochemistry suggests it would not fire.
Ligand-activated ion channels open without any ligand present, or
perhaps an action potential propagates down the axon without any
change in ion concentrations. That is what I call "contrary to
physical laws". You don't agree, so you must have some other idea of
what a neuron would have to do to qualify as firing contrary to
physical laws. What is it?
--
Stathis Papaioannou