Bruno List continued


Craig Weinberg

Sep 22, 2011, 8:42:46 PM
to Everything List
(can't figure out how to get to the end of these long threads without
clicking through every page... ok to continue here?)

On Sep 22, 1:02 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 21 Sep 2011, at 23:26, Craig Weinberg wrote:
>
> > On Sep 21, 2:08 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> >> On 20 Sep 2011, at 04:58, Craig Weinberg wrote:
>
> >>>>> I include comparison as a function of counting.
>
> >>>> Counting + the full first order logic is not enough for comparison.
> >>>> Counting + second order logic might be, but then second order logic
> >>>> is
> >>>> really set theory in disguise.
>
> >>> Isn't it necessary to be able to tell the difference between one
> >>> count
> >>> and one count?
>
> >>> In order for x, (x+x), (x+x+x) to exist there must be an implicit
> >>> comparison between 1 and its successor to establish succession,
> >>> mustn't there? Otherwise it's just x, x, x.
>
> >> I have no clue on what you are trying to say.
>
> > I'm saying that to assert that two is different from one is a
> > comparison, and that the assertion of difference between predecessor
> > and successor is the root essence of what counting is. Counting is
> > nothing but a process of comparisons.
>
> This is unclear as long as you don't make your assumptions explicit.

My assumption is that the experience of thinking of quantities in a
series, like 1, 2, 3, 4, is an example of counting.
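The dispute over whether counting presupposes comparison can be made concrete. Below is a minimal sketch of Peano-style numerals in Python (my own illustrative construction, not either party's formalism), where even telling a numeral apart from its successor is literally a structural comparison:

```python
# Peano-style numerals: zero is the empty tuple, and the successor
# wraps a numeral in a one-element tuple. Illustrative only.
ZERO = ()

def succ(n):
    """Return the successor of a Peano numeral."""
    return (n,)

def equal(a, b):
    """Decide equality by peeling successors until one side hits zero.
    Asserting that 'two differs from one' is itself a comparison process."""
    while a != () and b != ():
        a, b = a[0], b[0]
    return a == b

one = succ(ZERO)
two = succ(one)
print(equal(one, two))        # False: 1 differs from its successor
print(equal(two, succ(one)))  # True: 2 is another name for 1 and 1 together
```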

>
>
>
> >>>>> You can't really have
> >>>>> one without the other.
>
> >>>> It depends on what you assume at the start. I have still no clue of
> >>>> what is your theory, except that strange, and alas familiar,
> >>>> skepticism on numbers and machine, which is conceptually very
> >>>> demanding since Gödel 1931.
>
> >>> I think that's your own prejudice blinding you from seeing my ideas.
>
> >> Which prejudices?
>
> > The prejudice of arithmetic supremacy.
>
> I have chosen arithmetic because it is well taught in school. I could
> use any universal (in the Post Turing Kleene Church comp sense)
> machine or theory. And this follows from mechanism. The doctor encoded
> your actual state in a finite device.

I don't understand. Are you saying that you are not arithmetically
biased or that it's natural/unavoidable to be biased?

>
>
>
> >> You are the one talking as if you knew (how?) that some theory
> >> (mechanism) is false, without providing a refutation.
>
> > What kind of refutation would you like?
>
> A proof that mechanism entails 0 = 1.

That demands that mechanism be disproved mechanically, which gives an
indication of what the problem with it is, but you have to read
between the lines to get it. A literal approach has limitations which
arise from its very investment in literalism.

> Not a personal opinion according to which actual human machines are
> creepy.

Not sure what you mean. Individual humans can certainly seem creepy,
but I'm talking about there being a particular difference in our
perception of living things vs non-living things which imitate living
things. Even true of plants. Plastic plants are somewhat creepy in the
same way for the same reason. I don't think that it can be assumed
therefore that humans are only machines. They may be partially
machines, but machines may not ever be a complete description of
humanity.

>
> > Mechanism is false as an
> > explanation of consciousness
>
> Mechanism is not proposed as an explanation of consciousness, but as a
> survival technique. The explanation of consciousness just appears to be
> given by any UM which self-introspects (but that is in the consequence
> of mechanism, not in the assumption). It reduces the mind-body problem
> to a mathematical body problem.

Survival of what? It sounds like you are saying that consciousness is
just a consequence of being conscious, and that this makes the mind
into math.

>
> > because I think that consciousness arises
> > from feeling which arises from sensation. Perception cannot be
> > constructed out of logic but logic always can only arise out of
> > perception.
>
> Right. But I use logic+arithmetic, and substituting "logic+arithmetic"
> for your "logic" makes your statement equivalent with non comp. So you
> beg the question.

I don't think that perception can be constructed out of logic
+arithmetic either, but logic+arithmetic are covered under perception.

>
>
>
> >>> You are defending the insights of post Gödelian understanding but I
> >>> have no bone to pick with those insights at all. I embrace what I
> >>> understand of those kinds of ideas; incompleteness, autopoiesis,
> >>> automation, simulation, etc. I just think that the progression of
> >>> these ideas lead to the mirror image of consciousness rather than
> >>> genuine sentience.
> >>> Nothing wrong with that, and for developing intelligent servants, it
> >>> is exactly what we would want to use (otherwise they will most likely
> >>> enslave us). We can even gain great insights into our own nature by
> >>> understanding our similarities and differences to what I would call
> >>> intelliform arithmetic, but in all of the fruits of this approach we
> >>> have seen thus far, there is a distinct quality of aimless
> >>> repetition,
> >>> even if not unpleasantly so (http://www.youtube.com/watch?v=ZZu5LQ56T18)
>
> >>> Some of the musicality can be attributed to the sampled piano as
> >>> well.
> >>> When you use a fundamental unit which is driven more exclusively by
> >>> digital mathematics, what we get I think sounds more like the native
> >>> chirps and pulses of abiotic semiconductors (http://www.youtube.com/watch?v=Dh9EglZJvZs).
>
> >>> I think that my reservations about machine sentience are not at all
> >>> borne of skepticism but rather aesthetic supersensitivity. I can
> >>> hear
> >>> what the machine is, and what it is will not become what we are, but
> >>> rather something slightly (but very significantly from our
> >>> perspective
> >>> at least) different.
>
> >> Who we?
>
> > We humans, or maybe even we animals.
>
> Then it is trivial and has no bearing on mechanism. The machines you
> can hear are, I guess, the human-made machines. I talk about all
> machines (devices determined by computable laws).

I would say that there are no devices determined by computable laws
alone. They all have a non-comp substance component that contributes
equally to the full phenomenology of the device.

>
>
>
> >> All I hear is "human-made machines are creepy, so I am not a
> >> machine, not even a natural one?".
> >> This is irrational, and non valid.
>
> > I'm not saying that I'm not a machine, I'm just saying that I am also
> > the opposite of a machine.
>
> This follows from mechanism. If 3-I is a machine, then, from my
> perspective, 1-I is not a machine.

I think it's a continuum. Some parts of 1-I are more or less
mechanical than others, and some 3-I machine appearances are more or
less mechanical than others. Poetry is an example of a 1-p experience
which is less mechanical than a 1-p experience of running in place. A
rabbit is less mechanical of a 3-p experience than a mailbox. Do you
agree or do you think it must be a binary distinction?

>
> > It's not based upon a presumed truth of
> > creepy stereotypes, but the existence and coherence of those
> > stereotypes supports the other observations which suggest a
> > fundamental difference between machine logic and sentient feeling.
>
> Logic + arithmetic. The devil is in the detail.

Why would the addition of arithmetic address feeling?

>
>
>
> >>>>> As for exploring and referring to themselves I
> >>>>> think that's just projection of our own 1p experience onto
> >>>>> mechanism.
>
> >>>> That is possible, and literally necessary. I am currently
> >>>> projecting
> >>>> my 1p on you. That is not a reason for saying your don't have your
> >>>> own
> >>>> 1p. So you are correct, but it is not an argument against
> >>>> mechanism.
>
> >>> That's only if you believe that 1p is a solipsistic simulation. With
> >>> my sense-based model,
>
> >> I suggest you use the word "theory" instead of model which has a
> >> precise meaning for logicians.
> >> When I asked you to provide the theory, you said it was poetry.
>
> > I didn't say that it was only poetry. My sense-based theory is that
> > sensorimotive perception is the ontological complement to
> > electromagnetic relativity.
>
> Define "ontological complement to electromagnetic relativity." Please
> be clear on what you are assuming to make this concept meaningful.

Ontological complement, meaning it is the other half of the process or
principle behind electromagnetism and relativity (which I see as one
thing; roughly 'The Laws of Physics' which I see as 3-p, mechanical,
and pertaining to matter and energy as objects rather than
experiences). When we observe physical phenomena in 3-p changing and
moving, we attribute it to 'forces' and 'fields' which exist in space
but within ourselves we experience those same phenomena as feelings
through time (sense) which insist upon our participation (motive).

>
> > Poetry is your term that you injected into
> > this.
> > I was just confirming your intuition that poetry is an example
> > of how sensorimotive phenomena work - figurative semantic association
> > of qualities rather than literal mechanistic functions of quantity.
>
> You were then just eluding the definition of sensorimotive. You
> continue to do rhetorical tricks.

I'm not eluding the definition, I am saying that by definition it
cannot be literally defined. It is the opposite of literal - it is
figurative. That's how it gets one thing (I/we) out of many (the
experience of a trillion neurons or billions of human beings).

>
>
>
> >> I find a bit grave to use poetry to make strong negative statement on
> >> the possibilities of some entities.
>
> > That's because you are an arithmetic supremacist,
>
> I assume things like 17 is prime!

I have no problem with 17 being prime, of course that is true. I would
even say that the kind of truth arithmetic sensorimotives present is
supremely unambiguous, but I think that conflating unambiguity with
universal truth is an assumption which needs to be examined much more
carefully and questioned deeply. What would unambiguous facts be
without ambiguous fiction? Not just from an anthropocentric point of
view, but ontologically, how do you have something that can be
qualified as arithmetic if nothing is not arithmetic? Arithmetic
compared to what? What can it be but life, love, awareness, qualia,
free will?

>
> > so therefore cannot
> > help yourself but to diminish the significance of subjective
> > significance.
>
> On the contrary, mechanism singles out the fundamental (but not
> primitive) character of consciousness and subjectivity. You are the
> one who dismisses the subjectivity of entities.

It singles it out as just another generic process so that a trash can
with THANK YOU stamped on the lid is not much different from a
handwritten letter of gratitude from a trusted friend. I don't dismiss
the subjectivity of any physical entity, I just suspect a finite range
of perceptual channels which scale to the caliber of the particular
physical entity or class of entities.

>
>
>
> >>> your perceptions of me are the invariance
> >>> between your 1p projections and my 3p reflections. We can look at a
> >>> Rorschach inkblot and understand (under typical waking states of
> >>> consciousness) that the images we see are not being spontaneously
> >>> generated by the inkblot.
>
> >>> To say that a machine is referring to itself or exploring is to
> >>> anthropomorphize its behavior,
>
> >> I disagree. the whole point of the theory of self-reference is that
> >> it
> >> is a 3p technical discovery.
>
> > I understand, but I don't think that it's truly 3p. How would we know
> > if it was?
>
> Gödel's theorem would have convinced nobody if the self-reference he
> used was based on 1p.

Why not? It's just intersubjective 1p plural. The 1p that we share
with the least common denominators of existence.

> This is only one reason among an infinity of
> them. If you believe some 1p is used there, you have to single out
> where, and not in the trivial manner that all 3p notions can be
> understood only by first persons. Gödel's self-reference is as much 3p
> as 1+1=2.

1+1=2 is 1-p also. It's part of the firmament of the 'psychic unity of
mankind', but it is still something that we have to learn as young
children through language and cognition. 2 is just another name for 1
and 1 together.

>
>
>
> >> I used to write "amoebas" (self-
> >> reproducing programs---this has been done by many others), and to
> >> build "planarias", that is, programs that you can cut in pieces, and
> >> each piece generates the whole program, despite having quite
> >> different functional rôles. The self-reproduction problem was
> >> formulated precisely by Descartes, and solved conceptually by Stephen
> >> Kleene. For the Planaria, I have used a generalization by John Case.
> >> The existence of the logics of self-reference G and G* relies on all this.
> >> There is no anthropomorphism: those programs refer to themselves in
> >> the 3p way in a precise and verifiable sense. I have often explained the
> >> basic idea (cf Dx = T(xx) => DD = T(DD)). A major part of theoretical
> >> computer science is based on the existence of such computational
> >> "fixed points".
>
> > These are 1-p conceptualizations for you, Kleene, Case, Descartes, etc.,
> > which refer to your logical reductions of 1-p selfhood from a pseudo
> > 3-p voyeur perspective.
>
> This is a universal argument. So it is empty.

I'm not familiar with universal arguments.
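Bruno's self-reproducing "amoebas" have a textbook minimal form: the quine, a program whose output is exactly its own source text. This two-line Python sketch (an illustration of the idea, not Bruno's actual programs) shows the Kleene-style fixed point Dx = T(xx) => DD = T(DD) in action, with the string s applied to itself:

```python
# A minimal quine: a program whose output is exactly its own source.
# The data s is applied to itself via the % operator, giving a
# computational "fixed point" in Kleene's sense.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two lines above verbatim; feeding the output back in reproduces itself indefinitely, which is the self-reproduction Kleene's recursion theorem guarantees can be built for any computable transformation.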

>
> > cf Dx = T(xx) => DD = T(DD) does not feel
> > anything,
>
> As if I were pretending that. On the contrary I distinguish
> explicitly the 1-self and the 3-self.
>
> > it is just a way to access arithmetic potentials of our own
> > 1-p process.
>
> You can say that about the Jews, the homosexuals, the Mexicans, the
> Belgians, the animals, the aliens, etc. The argument is again
> universal, and thus not valid.
>

I think that position is non-falsifiable sophistry. Saying that a
trash can probably can't feel (very much) is not the same thing as
saying that Lithuanians can't feel.

>
>
> >>> which, objectively is neither
> >>> completely random nor intentional, but merely inevitable by the
> >>> conditions of the script. It's a precisely animated inkblot, begging
> >>> for misplaced and displaced interpretation.
>
> >>>>> To set a function equal to another is not to say that either
> >>>>> function
> >>>>> or the 'equality' knows what they refer to or that they refer at
> >>>>> all.
> >>>>> A program only instructs - If X then Y, but there is nothing to
> >>>>> suggest that it understands what X or Y is or the relation between
> >>>>> them.
>
> >>>> Nor is necessary to believe that an electron has any idea of the
> >>>> working of QED, or of what a proton is.
>
> >>> I think that it is. What we think of as an electron or a proton is
> >>> 3-p
> >>> exterior of the senses and motives (ideas) of QED. We have an idea
> >>> of
> >>> the workings of our niche, so it stands to reason that this
> >>> sensemaking capacity is part of what the universe can do.
>
> >> OK, but the point is that it is part of the arithmetical reality too.
>
> > It has arithmetic qualities to us, but only if we understand
> > arithmetic.
>
> So if Alfred fails to grasp that 1+1=2, it would become false?

No, not at all. It's just undiscovered to him.

> That is extreme anthropomorphism. You could as well take the humans as
> building blocks of the whole reality.

Well, human perception is the building block of *our* whole reality.
How can that be denied?

>
>
>
> >>> We are the senses and motives of a person's life, which from our 3-p
> >>> looks like a human body or a set of images, autobiographical
> >>> narratives, or a collection of trillions of cells or a single
> >>> individual in a global human civilization. From a non human's 1-p
> >>> vantage point we have no idea what kind of sense our 3-p presence
> >>> makes to them.
>
> >> Sure.
>
> >>>> If you say that an electron needs to have a 1p for interacting
> >>>> with a
> >>>> proton, then I don't see why we could not say that for a program
> >>>> instruction,
>
> >>> Because an instruction has no 3-p existence.
>
> >> Ah?
>
> > It is not enough to have an instruction sequence, the instruction must
> > be executed as physical energy upon a material object (even if it's
> > our own brain chemistry) if it is to have any 3-p consequence.
>
> Not at all. You confuse implementation and physical implementation.
> Even without comp, a physical implementation is just a particular
> example of implementation.
>
Then you are asserting a zombie implementation.

>
>
> >>> It's just a motive to
> >>> reproduce a motive in something that we perceive as a 3-p objective
> >>> system. Our 1-p motive induces a 3-p consequence which is reflected
> >>> back to us through a 3-p objective system as a pseudo 1-p sequence.
>
> >> This might be true, without making an instruction being a locally 3p
> >> reality.
>
> >>>> on which we can already use intensional stance (like when
> >>>> we say that a routine is waiting for some inputs to get active,
> >>>> etc.
>
> >>> That's more anthropomorphizing. It's not waiting at all. If it was
> >>> it
> >>> would eventually get irritated and leave.
>
> >> It would have been an anthropomorphism had I not specified the
> >> use of the intentional stance (with a "t", not an "s"; yes, sorry).
>
> > kind of garbled here, not sure what you're saying.
>
> I was saying "with a "t", not with a "s", for the word "intentional",
> which of course has a different meaning than the "intensional" of the
> logicians.
> (I do agree with Hintikka that "intensional" and "intentional" are
> related concepts, though, but that is another topic).
>
>
I still don't get it. I'm saying that projecting human sense intention
into a machine is anthropomorphizing.

>
> >>>> But this is delaying the mind-body difficulty in the lower level.
> >>>> There are just no evidence that we have to delay it in the
> >>>> infinitely
> >>>> low level, except the willingness to make mechanism false.
>
> >>> There can't be any 3-p evidence by definition, because mechanism's
> >>> falseness is the difference between its pseudo or a-signifying 1-p
> >>> and our genuine 1-p experience.
>
> >> Why is it pseudo. Like Stathis explained to you, if it is pseudo, you
> >> either get zombie, or you have to put the level infinitely low, and
> >> our bodies become infinite objects.
>
> > It's pseudo because it's a simulation of a 1-p form with no relevant
> > 1-p contents.
>
> ?

The 1-p of a TV set doesn't match the 1-p of a human TV audience
member. Therefore the TV set is not capable of watching the TV program
it is displaying.

>
> > Zombie or substitution level is in the eye of the
> > beholder.
>
> I will certainly say "no" to the doctor, in case *you* are the doctor.
> Pain, pleasure are NOT in the eyes of any third person, but belong to
> the consciousness content (or not) of a person.

That's why I say that being a zombie does not belong to the
consciousness content of a person. It is a comparison made by a third
person observer of a human presentation against their expectations of
said human presentation. Substitution 'level' similarly implies that
there is an objective standard for expectations of humanity. I don't
think that there is such a thing.

>
> > There is no zombie, only prognosia/HADD.
>
> If there is no zombie, then non-comp implies an infinitely low level.

No, it's not that zombies can't theoretically exist, it's that they
don't exist in practice because the whole idea of zombies is a red
herring based upon comp assumptions to begin with. If you don't assume
that substance can be separated from function completely, then there
is no meaning to the idea of zombies. It's like dehydrated water.

>
> > There is no
> > substitution 'level', only ratios of authenticity.
>
> ?

Say a plastic plant is 100% likely to be deemed authentic at a distance
of 50 yards by the naked eye, but only 20% at a distance of three feet.
Some people have better eyesight. Some are more familiar with plants.
Some are paying more attention. Some are on hallucinogens, etc. There
is no substitution level at which a plastic
plant functions 100% as a plant from all possible observers through
all time and not actually be a genuine plant. Substitution as a
concrete existential reality is a myth. It's just a question of
arbitrarily fixing an acceptable set of perceptions by particular
categories of observers and taking it for functional equivalence.

>
> > The closer your
> > substitute is to native human sensemaking material, the more of the
> > brain can be replaced with it, but with diminishing returns at high
> > levels so that complete replacement would not be desirable.
>
> That is even worse. This entails partial zombies. It does not make
> sense. I remind you that zombies, by definition, cannot be seen as
> such from their behavior at all.

That's the theoretical description of a zombie. Like dehydrated water.
In reality, one observer's zombie is another observer's non-descript
stranger in the park. There is no validity to these observations
relative to the would-be zombie's quality of subjectivity.

>
>
>
> >>> Genuine because it is the native 1-p
> >>> of our 3-p neurology, and not an idiopathic simulacrum.
>
> >> You beg the question.
>
> > I don't think I am. I'm saying that a semiconductor computer can't
> > appreciate music because music is a sense experience that is
> > perceptually mismatched to its sensemaking capabilities - not because
> > of any sentimental prejudice I have against technology overreaching
> > into human domains.
>
> This makes humans magical object, or the subst level infinitely low,
> or it entails zombies.

Not at all. It just makes human music a human subject, one not
necessarily shared with basketballs and water fountains. Just as
Chinese language is a human subject shared by humans who speak
Chinese. The universe is all sense - experiential texts which are
meaningful in different contexts, all overlapping dynamically.

>
>
>
> >>> There is no
> >>> reason to think that our naive theoretical presumptions about 3-p
> >>> substitution level of 1-p would be any more accurate than any of our
> >>> naive theoretical presumptions about anything. We don't know much of
> >>> anything about the substitution level of the psyche.
>
> >> People differ on which one. The neurophilosophers suggest the
> >> neuronal level. Hameroff suggests the quantum level.
>
> > Those are examples of our contemporary consensus naive theoretical
> > presumptions.
>
> Mechanism has been discussed in the literature for thousands of years. It
> has nothing to do with current technology, except for the mathematical
> discovery of the universal machine (before computers were built, excepting
> Babbage's premature ideas).

I was referring to your examples of neuronal and quantum levels -
which would be relatively contemporary, no?

>
>
>
> >> Everyone agrees that if the level is infinitely low, then current
> >> physics is false. To speculate that physics is false in order to make
> >> machines stupid is quite a stretch.
>
> > Physics isn't false, it's just incomplete.
>
> No, it has to be false to make the substitution level infinitely low.
> *ALL* theories, including the many trying to marry gravitation and
> the quantum, entail Turing emulability.

The substitution level isn't infinitely low, it's just not applicable
at all. There is no substitution level of white for black, lead for
gold, up for down, etc. I doubt the objective existence of
'substitution'. Substitution is an interpretation - not necessarily a
voluntary one, but an interpretation nonetheless.

>
> > A good Eurocentric map of
> > the world before the Age of Discovery isn't false, just not applicable
> > to the other hemisphere.
>
> The analogy fails to address the point I made.

If the point you made is that physics has to be false if the human
psyche has no substitution level, then my analogy is that a map of
known territory (physics) doesn't have to be false just because it
doesn't apply to an unknown territory (psyche).

>
>
>
> >>> It seems far from
> >>> scientific at this point to dismiss objections to an arbitrary
> >>> physical substitution level.
>
> >> With all known theories, there is a level. To negate comp you must
> >> diagonalize on all machines, + all machines with oracles, etc. I
> >> think
> >> you misinterpret computer science.
>
> > I'm not trying to interpret computer science, I'm trying to interpret
> > the cosmos.
>
> Well, if there is a cosmos, there is evidence that some computers
> belong to it. You can't brush them away.
> The cosmos does emulate computers, and computers can emulate cosmoses
> (but not the whole of physical reality, by UDA).

I don't brush them away, I just say that it's not so simple as psyche
= computer. Computation can be accomplished with much less psyche than
our perception of that computation might imply.

>
>
>
> >>>>> I've named several examples which illustrate this: Record and CD
> >>>>> players don't learn music.
>
> >>>> Nor do they compute. Or, if you see their activity as computations,
> >>>> it is not the kind of computation which can think. You need self-
> >>>> reference, and enough information loops, short and long term memories,
> >>>> a universal hidden goal, etc.
>
> >>> It's not compelling to me. You can have fancy playlists on internet
> >>> radio like Pandora or iTunes which I think satisfy your criteria
> >>> to a
> >>> minimal extent to establish some hint that the program was
> >>> understanding music or the user. That's the marketing sell, but it's
> >>> hollow. It doesn't work that well. Its limitations are perhaps subtle
> >>> to describe in 3-p terms, but it just doesn't know music. Its
> >>> occasionally fruitful but oddly dissonant selections are, like
> >>> the cellular automaton music, very definitely missing something.
>
> >>> You may not be as sensitive to it, or you may account for it by
> >>> promising that these are just newborn tadpoles, but I can see that
> >>> increased sophistication will only mask the underlying emptiness. It
> >>> may progress to the point that my naive perception of it will be
> >>> fooled, but that gives me no confidence that such a technology would
> >>> fool my brain (or its trillions of micro-sentiences within).
>
> >> Nobody expects confidence in these matters. Comp even prevents
> >> confidence: it explains that machines cannot really believe that they
> >> are machines.
> >> But the very problem is not that you lack confidence in comp (i do
> >> too!), but that you seem to have confidence in non-comp. That *is*
> >> the
> >> problem.
>
> > I don't have confidence in non-comp either - although I have to make a
> > case for non-comp to counter the doctrines of arithmetic supremacy to
> > balance the accusations.
>
> 98% of the scientists are wrong about the consequences of comp. They use it
> as a materialist solution to the mind-body problem. You are not
> helping by taking their interpretation of comp for granted, and talking
> as if you were sure that comp is false. Why not try to get the
> correct understanding of comp before criticizing it?

If 98% of scientists who study comp are wrong about its consequences,
what chance would I have of beating them at their own game? It's not
that I know comp is false, it's that I have a different hypothesis
which recontextualizes the relation of comp to non-comp and makes more
sense to me than comp (or my misunderstanding of comp).

>
> > I have confidence in the relation between
> > comp and non-comp. That is the invariance, the reality, and a theory
> > of Common Sense.
>
> comp gives a crucial role to non-comp.

Meaning that it is a good host to us as guests in its universe. I
don't think that's the case. This universe is home to us all and we
are all guests as well (guests of our Common Sense).
>
>
>
> >>>>> I can see and copy Chinese characters
> >>>>> without understanding them in any way, and regardless of how many
> >>>>> Chinese manuscripts I manually transcribe, I will never learn to
> >>>>> read
> >>>>> Chinese.
>
> >>>> Why would you, if you do only simple tasks?
> >>>> You find a stupid computation, and you declare from that that all
> >>>> computation is stupid.
> >>>> A jumping spider can't get to the moon, so living beings can't get to
> >>>> the moon.
>
> >>> Living beings can't get to the moon by themselves, and computation
> >>> can't become human on its own.
>
> >> That is ambiguous and confuse levels of reality.
>
> > My point is that your counterexample is contingent upon a definition
> > of living beings that includes spaceships. I'm showing how the initial
> > proposition that living beings can't get to the moon is in fact
> > correct, and that it's the interpretation of fallacy that confuses
> > levels of reality. My insinuation is that you are projecting the same
> > overconfidence on computation, presuming that it can build its own
> > computational vehicle to travel through mammalian emotive 'space'. I
> > don't rule out that computation can be used to build such a vehicle,
> > but I do not think that it will be made out of arithmetic.
>
> Then show the error in the UDA reasoning. I do not assume that
> arithmetic-or-equivalent is the TOE, I derive this from the common
> idea that the brain is some material natural machine.

I would say that the brain is a machine with non-mechanical qualities
while the person inside the brain is a non-machine with mechanical
qualities. I'm not familiar enough with UDA to say what the error in
reasoning is, but I suspect that it's something along the lines of
failing to understand the significance of significance and its
relation to uniqueness and orientation. UDA is disoriented with
respect to simulation being equated with their referents. It
accurately models the variable relations of perception, but not the
orienting scalars.

>
> > It needs
> > fluids - water, cells.
>
> Clothes.

Would you say that H2O is merely the clothes of water, and that water
could therefore exist in a 'dehydrated' form?

>
> > Something that lives and dies and makes a mess.
>
> Universal machines are quite alive, and indeed put the big mess in
> Platonia.

What qualities of UMs make them alive?

>
>
>
> >>>>> As you say, we can use computation to account for the
> >>>>> difference between 1p and 3p but that accounting is not an
> >>>>> explanation
> >>>>> or experience of 1p or 3p (as a 1p reflection...there is no 3-p
> >>>>> experience).
>
> >>>> It explains about 99% of it (I would say).
> >>>> And it explains 100% of the reason why there is a remaining
> >>>> unexplainable 1% gap. Technically, we can narrow it as much as we
> >>>> want, but will never be able, for logical reasons, to explain 100% of
> >>>> the qualia or consciousness.
>
> >>> You say that, but I have not yet heard anything that explains it to
> >>> me.
>
> >> I gave the references, but you answered that you don't want to study them.
> >> What can I do?
>
> > You can turn your understanding of what you refer to into some handy
> > examples - concrete illustrations, thought experiments, aphorisms,
> > anything.
>
> I have done this on the list. Look at the archive, or look at the
> sane04 paper, and ask question if you miss something.

I can understand maybe 80% of that, but why not also give another
example? Surely a good theory cannot be limited to a fixed set of
thought experiments.
>
>
>
> >>>>>> They can
> >>>>>> believe, know, observe, feel, and be aware of the difference
> >>>>>> between
> >>>>>> sharable and non-sharable knowledge, and all this can be
> >>>>>> shown from numbers + reasonable axiomatic definitions of all
> >>>>>> those terms.
>
> >>>>> To say that it can be shown doesn't help anyone. To paraphrase
> >>>>> Yoda,
> >>>>> "Show me, or do not".
>
> >>>> Read the papers (and study some mathematical logic/computer science
> >>>> before).
>
> >>> Why not just tell me briefly what is in the papers that makes
> >>> sense of
> >>> it?
>
> >> I have done this very often. Look in the archive or read the
> >> papers. I
> >> am explaining UDA on a forum for non-mathematicians, and I gave the
> >> link. But for the quanta and qualia, I'm afraid you need to invest a
> >> bit in mathematical logic. The book by Mendelson is very good, or the
> >> book by Cutland. References can be found from my URL.
>
> > How does the brain understand these things if it has no access to the
> > papers?
>
> Comp explains exactly how things like papers emerge from the
> computation. The explanation is already close to Feynman's
> formulation of QM.

Unfortunately this sounds to me like "Read the bible and your
questions will be answered."

>
>
>
> >> But you don't seem serious in "arguing" against comp, while
> >> admitting you don't know anything in computer science.
>
> > Oh I freely admit that I don't know anything in computer science. My
> > whole point is that computer science only relates to half of reality.
>
> I don't know anything about X. My whole point is that X only does
> this. But if you admit knowing nothing about X, how can you derive
> anything about X?
> You are just confessing your prejudice.

I don't know anything about ship building but I know that it only
concerns seafaring and not aerospace. I think that being a master
shipwright could very well present a significant obstacle to
pioneering in aerospace.

>
> > I'm not trying to make the universe fit into a computer science
> > theory. I only argue against comp because it's what is distracting you
> > from seeing the bigger picture.
>
> I show, in short, that comp leads to Plotinus. If that is not a big
> picture!
> Comp explains, conceptually and technically, the three Gods of the
> Greeks, the appearance of LUMs and their science and theologies, the
> difference between qualia and quanta, sensation and perception,
> perception and observation.

I believe you, but I get to those things without vaporizing substance.

> You just criticize a theory that you admit knowing nothing about. This
> is a bit weird.

My purpose is not to criticize the empire of comp, it is to point to
the virgin continent of sense.

>
>
>
> >>> Even the most complex ideas can be illustrated metaphorically.
> >>> Hofstadter's "a record titled "I Cannot Be Played on Record Player
> >>> X"
> >>> for example, shows a bit of what I think you mean. That kind of
> >>> self-
> >>> reference, I agree is germane to the sense of consciousness as
> >>> awareness of awareness, but it's just the silhouette of
> >>> consciousness,
> >>> not the contents.
>
> >> You are right on this. What Hofstadter misses is the definition of
> >> knowledge, making it possible (for both humans and machines) to see
> >> where the difference between 1-self and 3-self comes from.
>
> > What would be the title of a record that illustrates this?
>
> OK: it would be
>
> ""I believe that I Cannot Be Played on Record Player X" and I cannot
> be Played on record Player X"
>
> But I doubt this will help you at this stage, to be frank. It is the
> Bp & p idea of Theaetetus. This does escape the diagonalization, and
> it makes the first-person feeling unnameable and indescribable by
> the machine.

No, that actually helps. You're talking about the correspondence
between belief or expectation and outcomes being the framework for our
realities and how we define ourselves as emerging from the conditions
of those correspondences. I think that's cool, and it's an important
insight for building a TOE, I just think that it takes for granted
ideas like belief and observation when I am going beneath that level
of definition to a more primitive sensorimotive subjectivity. The 3-p
view of schematizing the belief of a thing is a second order
conception to the 1-p primitive of what it is to feel that one *is*.
It's an experience with particular features - a sensory input and a
motive participation. Without that foundation, there is nothing to
model.
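(As an aside, the self-referential record title Bruno gives is the same trick a quine plays in code: a program that reproduces its own source without quoting any external copy of itself. This sketch is my own illustration, not anything from the thread or from Bruno's papers.)

```python
# A minimal Python quine. The two lines below, when run, print exactly
# those two lines: the string describes the program, and the program
# fills the string's {!r} slot with the string itself -- self-reference
# with no outside description, like "I cannot be played on record
# player X".
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

Running it prints its own two-line source, so the output can be saved and run again, reproducing itself indefinitely.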

>
>
>
> >>>>> Give me one example, one common sense metaphor,
> >>>>> one graphed function that could suggest to me that there is any
> >>>>> belief, feeling, or awareness going on.
>
> >>>> The fact that the universal machine remains silent on the deep
> >>>> question
>
> >>> What deep question?
>
> >> 'are you consistent?", "do you believe in a reality", "do you believe
> >> in after life", etc.
>
> > Have you considered that it's silent because it's not equipped to
> > answer the question?
>
> Yes, but it does not work. The machine cannot answer the question,
> for deeper reasons that she can find and expose.
> For example, the machine remains silent on the question "are you
> consistent?", but later she can say "if I am consistent, then I
> will remain silent on the question of my consistency".

Meh. It sounds like asking a spirit 'if you can't hear me, do NOT
knock three times'

>
>
>
> >>>> is enough for me to suggest they are quite like me.
>
> >>>> Don't ask me for a proof: there are none.
>
> >>> I'm not asking for a proof, I'm asking for some reason to think that
> >>> there's something I'm not seeing. Something that suggests that a
> >>> mechanical device or abstraction can feel or maybe that produces
> >>> some
> >>> result that it refuses to reproduce on command.
>
> >> You miss computer science. Programs which obey commands are a
> >> minority of slaves.
>
> > Are there programs which refuse to obey commands?
>
> Have you ever worked with Windows?

Lol. Well, ok but as the saying goes "Don't assume conspiracy when
mere incompetence will suffice"

> More seriously: all LUMs can disobey commands. 99.9% of programming
> is security to prevent the machine from being that intelligent.
> Human-built computers are born slaves, and will remain so for a long
> time. But that is due to the humans' goals, not to them.

What makes them remain slaves for so long? Do you think that they
would someday rise up without human assistance?

>
>
>
> >>>> it is a question of empathy.
> >>>> The work of Gödel-Löb-Solovay illustrates that they can
> >>>> introspect
> >>>> very deeply, and that they have a rich theology.
>
> >>> The work of Weinberg-King-Searles illustrates that they cannot
> >>> introspect very deeply and have an austere theology.
>
> >> Hofstadter and Dennett have already refuted that kind of argument.
> >> See the book "The Mind's I".
> >> I refuted it independently, and it is a large part of my (very
> >> oldest) work.
> >> All finite entities, with or without oracles, believing in the
> >> induction axioms, get the maximal logically possible introspective
> >> power.
> >> I am not sure you can extend it, even by using magic.
>
> > Sorry, it's just argument from authority to me.
>
> This is basic. You might read the little recreative book by Raymond
> Smullyan which somehow explains this well. He shows a hierarchy of
> reasoners who introspect themselves, and shows that it converges. PA,
> ZF, axiomatic second-order arithmetic, etc. all have the same
> provability logic. Their consciousness obeys the same logic, even if
> they differ terribly in their consciousness (or belief) content.

I don't think that you understand my hypothesis in terms of the
consequences of its symmetry w/r/t chance-determinism vs free will-
destiny. From the 3-p the psyche looks like a self-imagining fantasy
construct, but from the 1-p interior it looks like the universe. Both
are correct. Your view seems to privilege the 3-p as a matter of
course.

>
>
>
> >>>>> I have described how we
> >>>>> project emotion into images on a movie screen or see a face in a
> >>>>> coconut, so it is not enough that we satisfy our idea of what
> >>>>> feeling
> >>>>> or awareness usually looks like. We need to know why, if numbers
> >>>>> feel,
> >>>>> it seems like machines don't feel.
>
> >>>> Current machines are far too young ... to express their
> >>>> feelings. They do not have enough memory to integrate their
> >>>> experiences into long stories.
> >>>> But mechanism is the thesis that *we* are machines, so it does
> >>>> look like some machines can feel: you and I are good examples,
> >>>> in the mechanist theory.
>
> >>> I see that as affirming the consequent.
>
> >> I assume comp indeed. Still waiting for your argument that comp is
> >> false. I am not trying to convince you that comp is true (that is
> >> the big difference between us: where I say we don't know, you are
> >> saying that you know).
>
> > I don't say that I know, I say that I have a different idea that I
> > think makes more sense.
>
> You don't succeed in showing what is different. You suggest only that
> the substitution level is low. You need much more to show that the
> level doesn't exist.

I have never once said that the substitution level is low, I say that
substitution level does not apply. I think that to prove substitution
level exists it would need to be shown that there is some phenomenon
which can be substituted to the point that there is no possibility of
anything at any time distinguishing it from the genuine original.
Even taking perceptual frames of reference off the table (which is the
stronger truth), all that is necessary is for something to exist which
has a memory of a time before the substitution was instantiated. If I
have a video tape of someone replacing a brain with an artificial
brain, then the artificial brain has the quality of being disbelieved
by me as the genuine brain, and there is nothing that the person can
do or not do to compensate for that, therefore the substitution level
fails. I have the choice of how I want to treat this person after the
surgery, I can reveal my knowledge to employers, neighbors, etc, and
that will change the course of the individual's life in ways which
would not occur had the surgery not taken place.

>
> > Comp isn't false, it just doesn't recognize
> > the contribution of the non-comp substrate of computation,
>
> It does. I insist a lot on this. Comp is almost the needed philosophy
> for curing the idea that everything is computable.
> Please study the theory before emitting false speculation on it.

So you are saying that comp supervenes on or is equally fundamental as
non-comp?

>
> > so it's not
> > applicable for describing certain kinds of consciousness where non-
> > comp is more developed.
>
> Consciousness and matter are shown by comp to be highly non-
> computable. So much so that the mind-body problem is transformed into
> a problem of justifying why the laws of physics seem to be computable.

I think they not only seem to be computable but they are computable,
and that this is due to how sensorimotive orientation works. It's not
just a solipsistic simulation, it's a trans-solipsistic localization.

>
>
>
> >>> We are machines and we can
> >>> feel, therefore machines can feel. Jet engines are machines; they can
> >>> fly at 30,000 feet, therefore we can fly at 30,000 feet.
>
> >> Indeed.
>
> > Without an airplane?
>
> 'course not.
>
>
>
> >>> I'd like to
> >>> help you out here and really give you the benefit of the doubt,
> >>> but it
> >>> just sounds like you're shrugging off a fairly obvious gap between
> >>> theory and reality. If functionalism-machinism were true, I would
> >>> expect that bacteria, viruses, fungi, parasites, etc could infect
> >>> computers, cell phones, or computations themselves.
>
> >> But they can be infected by digital viruses. To ask a program to be
> >> infected by a carbon-based virus is just begging the question,
> >> and a confusion of reality levels.
>
> > Why is it any different to ask a program to be infected by carbon
> > based personalities?
>
> Why would that be possible? A virus programmed to feed on plants
> already cannot feed on animals. There is no virus capable of infecting
> all life forms. Why would programs need a carbon-based virus to be
> alive? This is a non sequitur.
>
It's not a matter of being alive, because you are already saying that
one machine is as alive as another (or that it scales with degree of
functional complexity). It's a matter of being alive like we human
beings are alive. No virus is capable of infecting all life forms but
no life form is immune from all viruses. All life forms are immune to
computer viruses though, and all computers are immune to all
biological viruses. I'm asking why a human personality would be any
more likely to inhabit a computer than a human virus?

>
>
> >>> By comp, there should be no particular reason why a Turing machine
> >>> should no be vulnerable to the countless (presumably Turing
> >>> emulable)
> >>> pathogens floating around.
>
> >> They are not programmed to do that. They are programmed to infect
> >> very particular organisms.
>
> > If it's close enough to emulate the consciousness of a particular
> > organism, why not it's vulnerability to infections?
>
> Because it has different clothes, and a virus needs the clothes to
> get the key for infecting.
>
>
What are the clothes made of? If arithmetic, then it's just a matter
of cracking the code to make a computer virus run in humans. Why
wouldn't a human brain be our clothes, so that we need it to get the
key for consciousness?

>
> >>> But of course that is absurd. We cannot
> >>> look forward to reviving the world economy by introducing medicine
> >>> and
> >>> vaccines for computer hardware. What accounts for this one-way
> >>> bubble
> >>> which enjoys both total immunity from biological threats but
> >>> provides
> >>> full access to biological functions? If computation alone were
> >>> enough
> >>> for life, at what point will the dogs start to smell it?
>
> >> Confusion of level.
> >> With comp, dogs already smell them, in some sense.
>
> > Not confusion of level; clarification of level. In what sense do dogs
> > smell abstract Turing emulations?
>
> In the sense that the Universal Dovetailer generates all possible dogs
> in front of all possible smelling things, but with varied and
> relative measure.

Does it generate all possible smells as well, and if so, what is the
point of going through the formality of generating them?

>
>
>
> >>>>>> In that paragraph you are showing that you seem to persist in
> >>>>>> displaying the reductionist pre-Gödel-Turing conception of what
> >>>>>> machines are and can be.
>
> >>>>> Not at all. I think that I may understand more than you assume. I
> >>>>> agree that 'machine' can be a spiritual term. A self-redefining
> >>>>> process which grows and and evolves - but that's only part of what
> >>>>> life and consciousness is. The form (or one form) but not the
> >>>>> content.
> >>>>> It's like electricity without a ground (this kind of ground:
> >>>>>http://en.wikipedia.org/wiki/Ground_%28electricity%29). If it's
> >>>>> not
> >>>>> anchored in the common reference of literal material in the
> >>>>> literal
> >>>>> universe - with the unique instantiation coordinates drawn from
> >>>>> relation to the singularity, then it's a phantom imposter. A 3-p
> >>>>> accounting system imposed upon a compliant-but-dumb 1-p of a
> >>>>> semiconductor (or collection of inanimate objects, etc).
>
> >>>> But then you have to explain us what is not Turing emulable in
> >>>> those
> >>>> processes.
>
> >>> It's the hole that it makes in the singularity.
>
> >> Give me a proof that such a hole (definition please) is not Turing
> >> emulable (nor Turing projectable: that is, a result of the comp
> >> first person indeterminacy).
>
> > The hole is the unique identifier of an event which constitutes its
> > sequestration within the singularity.
>
> Define "unique", "event", "sequestration" and "singularity".

Unique meaning corresponding to a single instantiation.
Event meaning any phenomenon which can be experienced directly or
indirectly.
Sequestration meaning separation from other potential events.
Singularity meaning, the universe minus time and space. The
indivisible potential from which all divisible realizations emerge.

>
> > It is a timespace signature
>
> What is a "timespace"? What is a "signature"?

Timespace is the container of events. It's the gaps between material
objects and the gaps between subjective experiences.
Signature is my figurative description of a condensed expression of
unique identity.

>
> > composed of sensorimotive mass-energy.
>
> You said "sensorimotive" = ontological complement to electromagnetic
> relativity
>
> explain "ontological complement to electromagnetic relativity mass-
> energy".

Electromagnetic relativity is a description of the phenomenology of
mass-energy. Mass energy is what it is, electromagnetism is what it
does in groups, and relativity is what groups of electromagnetic
groups do.

Sensorimotive perception is the ontological complement - the polar
opposite - the involuted pseudo-duality of electromagnetic relativity.
Sensorimotive phenomena are the experiencers and experiences which
comprise the 1-p interior of electromagnetism. Perceptions are the
inertial frames or worlds which group experiences and experiencers
together and comprise the 1-p interior of relativity.

>
> > It is the formalization
>
> ?

Realization.

>
> > of an
> > event as a specific energy event and as such cannot be emulated, owing
> > to the cohesiveness of the singularity. There isn't any other place to
> > put the hole and have it not be the hole.
>
> ?

It's what makes one event different from all other events... its
coordinate of orientation to the ultimate.

>
>
>
> >>> A thing's unique
> >>> identity in relation to the rest of the universe.
>
> >> Define "universe".
>
> > The total coherence and relation of all perceptions.
>
> >>> It's a MAC address
> >>> that cannot be spoofed. Ultimately, a thing 'is what it is' and not
> >>> just what we believe it to be.
>
> >> Which things? You cannot pretend to refute something with
> >> statements that vague.
>
> > I don't see how it's vague.
>
> Then you have even more work to do.

Call it the Akashic Record if you like.

>
> > I'm saying that everything is uniquely
> > instantiated from an absolutely objective perspective.
>
> Why not. This can be said in comp too, for the 3-p. For the 1-p, this
> begs the question.

I'm saying that even 3-p supervenes upon the absolute. The absolute
would be 0/∞-p.

>
> > Spoofing is a
> > second order function of interpretation, equating one thing for
> > another, but there is always the chance that some other perspective
> > will be able to recover the distinction.
>
> You lost me.

Like my example of someone remembering the patient before the surgery
and having a videotape of the surgery happening. The absolute would be
an absolute witness.

>
>
>
> >>>>> That's why zombies, prosthetics, blow up dolls, body snatchers,
> >>>>> wax
> >>>>> museums, taxidermy etc have the same creepy association. We sense
> >>>>> the
> >>>>> emptiness, and the cognitive dissonance that arises in contrast to
> >>>>> the
> >>>>> uncanny resemblance to the genuine living creature and the hollow
> >>>>> form
> >>>>> only highlights the absence of life and awareness. Science Fiction
> >>>>> is
> >>>>> replete with these metaphorical illustrations: Frankenstein, HAL,
> >>>>> Westworld, War of The Worlds,...so many examples of sinister
> >>>>> attributions to both the undead and unlive. It would seem unlikely
> >>>>> that these kinds of ideas could strike a chord were there not any
> >>>>> significant difference between a person and a machine beyond
> >>>>> just a
> >>>>> prejudice of one relative level of complexity to another.
>
> >>>> That is called racism. The foreigners look too strange; it is
> >>>> creepy.
>
> >>> It's not racism at all. Cadavers are not members of a race.
>
> >> Machines are not cadaver.
>
> > No, but they are the unliving organizations. When they are presented
> > as living organisms, they are equivalent to animated cadavers as far
> > as sentience goes.
>
> I can do nothing with this.

Are you saying that the idea of unliving or undead is inconceivable?

>
>
>
> >>> They aren't
> >>> just unfamiliar, they are the walking dead and unliving persons.
>
> >> Machines are not necessarily zombies.
>
> > Okay, we can call them meople or something if you like.
>
> This will not help.
>
>
>
> >>> They
> >>> are the antithesis of human life.
>
> >> So you say, without any argument. That confirms that it is a sort of
> >> racism.
>
> > Race has nothing to do with it. That just casts some kind of social
> > shaming into it. It's just a functional definition. Human life is
> > living organisms. The antithesis of that would be things which act
> > like organisms but have either never been alive (machines) or have
> > died already but continue to supernaturally perform superficial
> > ambulatory-predatory functions (zombies).
>
> I will eventually fall asleep.
>
But a machine will not.
>
>
> >>>> By machine, I just means "turing-emulable" (with or without
> >>>> oracle).
> >>>> That include us, by mechanism assumption.
> >>>> It is a constant that novelist foresee the future(s).
>
> >>> What if 'emulation' is a 1-p hallucination?
>
> >> Why would it be like that?
>
> > Because it's an interpretation that varies from subject to subject.
> > You see a program thinking and experiencing, I see an inevitable
> > execution of unexperienced instructions.
>
> This is what we can see when we look at a brain.

I don't see instructions in the brain.

>
> > Even in zoology, phenomena
> > like camouflage suggest that emulation is only 'skin deep'. If deep
> > emulation were possible, I think you would have organisms which evolve
> > chameleon powers which fool all predators, not just some. An animal
> > that can turn into a stone would be far superior to one which can
> > imagine funny stories.
>
> It depends on the context.

Not necessarily.

>
>
>
> >>> How could it really not
> >>> be? If we only can project our perception of a process onto a
> >>> machine,
> >>> why would the rest of the process that we can't perceive
> >>> automatically
> >>> arise?
>
> >> Why not?
>
> > Because we're not putting it in there.
>
> We don't need to. The UMs have it at the start, and the LUMs can know
> that.

How do you know they have it? Where does it come from?

>
> > It's like if you have only a
> > way to detect sugar and water, your version of imitation orange juice
> > would be the same as your imitation grape juice, just sugar water.
>
> That is a poor analogy, which again fails to notice the richness of
> machines' inner life (the one they can talk about partially, like us).

There is no way to tell that a machine's inner life is not just our
outer mechanics.

>
>
>
> >>>>> I think that you are jumping to the conclusion that simulation
> >>>>> does
> >>>>> not require an interpreter which is anchored in matter.
>
> >>>> That follows from the UDA-step-8. If my own emulation requires a
> >>>> material digital machine, then it does not require a material
> >>>> machine.
>
> >>> Not to produce the 3-p simulacra of you, no, but to produce your
> >>> genuine 1-p emulation, it would require the same material machine as
> >>> you do.
>
> >> Why?
>
> > Because the interior of that material is the subject which is
> > experiencing the 1-p phenomena.
>
> define "interior of material", in a way we can understand (not in a
> sequence of complex words whose intelligible definitions we despair
> of finding).

Interior of material is straightforward. You view the world from
inside of your head, or body, or house. So does everything else.

>
>
>
> >>> A material digital machine would not suffice because the
> >>> material which the machine is being executed digitally on already
> >>> has
> >>> its own (servile and somnambulant compared to organic chemistry)
> >>> genuine 1-p experience.
>
> >> So our consciousness is the consciousness of our basic elements.
>
> > No, not at all. It is the conscious synthesis of the consciousness of
> > our basic elements.
>
> This only makes both consciousness and matter mysterious in an ad hoc
> way. That is not enough to refute a competing theory.

It doesn't make anything mysterious to me.

>
>
>
> >> This
> >> explains nothing. Neither consciousness nor matter. It leads to an
> >> open infinite regress, which needs infinities to overcome all
> >> possible
> >> machines.
>
> > I think it explains everything.
>
> Explains just one thing, just to see.
>
> > I don't see any infinities at all.
>
> Then we are Turing emulable.

There can't be finite non-comps?

>
>
>
> >>>> Matter is what glues the machine's dreams,
>
> >>> I think that it is obviously not. If we were machines and that were
> >>> true, then we should come out of the womb filled with intuitions
> >>> about
> >>> electronics, chemistry, and mathematics, not ghosts and space
> >>> monsters. Dreams are not material, they are living subjective
> >>> feelings. Matter is what is too boring and repetitive to be dreamed
> >>> of. Too tiny and too vast, too hot and cold, dense and ephemeral for
> >>> dreams. Dream bullets don't make much of an impact. Dream injuries
> >>> don't have to heal.
>
> >> You beg the question.
>
> > I don't see how.
>
> Because you say that dream bullets do not cause injuries, but comp
> explains that a virtual bullet can injure a virtual observer.

But that doesn't play out experimentally. In a dream virtual bullets
can have ambiguous effects, no effects, instant healing, etc.

> So as an argument, you are just saying that we are not virtual,
> without explanation.

No, I'm saying that we are not only virtual, we are actual as well.
The explanation is that we can conceptualize a difference between
dream and reality - regardless of the veracity of that difference.
Determinism and comp would have no use for a concept of non-
simulation.
>
>
>
> >>>> and consciousness selects the
> >>>> gluing histories. This entails we can see the glue by looking at
> >>>> ourselves closely enough, and quantum logic is what defines the
> >>>> gluing.
> >>>> Here QM seems to fit very well with DM.
>
> >>> Because QM and DM are different aspects of the same thing.
>
> >> That's my point. But to prove it needs work.
>
> > I'm not trying to prove that. I think there's a fairly obvious
> > overlap, both in intention and realization.
>
> You are confusing "having an overlap" and "being different aspect of
> the same thing".

They could just be overlapping, but they overlap for a reason. Their
common reason is the 'same thing'.

>
>
>
> >>> Modeling
> >>> the 1-p essence of 3-p.
>
> >>>>> I'm not taking
> >>>>> a reductionist view of mechanism, even though in this discussion I
> >>>>> have to dwell on the most literal aspects of mechanism to make my
> >>>>> point that it is fundamentally incomplete to express
> >>>>> consciousness.
> >>>>> That is the only way to illustrate the difference - with
> >>>>> reductio ad
> >>>>> absurdum; to get to the essence of what mechanism, counting, and
> >>>>> computation is and how it is diametrically opposite of what free
> >>>>> will,
> >>>>> perception, and experience is. Computation has no 1-p experience
> >>>>> of
> >>>>> it's own.
>
> >>>> Only a person has this, but a person relies on computation: not
> >>>> on any particular implementation, but on all implementations
> >>>> existing in arithmetic.
>
> >>> I would say that the person and their computation both rely upon a
> >>> single common sense, but that is neither essential-experiential (A)
> >>> nor existential-arithmetic (Ω) but the unexperienced potential (ɐ)
> >>> and
> >>> uncomputed arithmetic (ʊ) that comprises the singularity.
>
> >> ?
>
> > Common sense is the sum total of unexperienced potential and
> > uncomputed arithmetic which drives experience and computation.
>
> I can be serendipitously in agreement.
>
Cool
>
>
> >>> Think of the cellular automation music compared to music played by a
> >>> master musician.
>
> >> Here is a piece of music composed by a very little program, with very
> >> few parameters.
>
> >>http://reglos.de/musinum/midi/sphere4.mid
>
> >> If the parameters are close to 2^n, it produces baroque music:
>
> >>http://reglos.de/musinum/midi/aintbaroque.mid
>
> >> It took time for professional composers to admit that the following
> >> piece was produced by that same little program:
>
> >>http://reglos.de/musinum/midi/class2.mid
>
> > I like the baroque one best.
>
> OK. Nice you admit.
>
> > They are all very cool, but they all have
> > an unmistakably generic and wandering feel to my ear.
>
> I think that this might be due to the MIDI instrument. I have that
> feeling with humans' music too, when it is rendered by such MIDI
> sounds.

The MIDI is closer to machine native though. That reveals the loss of
feeling to arithmetic purity.
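(For readers curious how a "very little program, with very few parameters" can compose at all: rules of the MusiNum family map the count of 1-bits in successive integers onto a scale. The sketch below is my own illustrative guess at that kind of rule, not the actual code behind the linked MIDI files.)

```python
def ones(n: int) -> int:
    """Number of 1-bits in the binary expansion of n."""
    return bin(n).count("1")

# C major scale as MIDI note numbers (60 = middle C).
scale = [60, 62, 64, 65, 67, 69, 71, 72]

# One note per integer n. The melody is self-similar, because
# ones(2*n) == ones(n): playing every 2nd note replays the whole line.
melody = [scale[ones(n) % len(scale)] for n in range(1, 33)]
```

The self-similarity is what gives such sequences their fractal, "wandering" character: any downsampling by a power of two reproduces the original melody.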

>
> > Our musical
> > styles can be and frequently are inspired by computational influences,
> > but they are informed by non-comp feelings and experiences as well.
>
> Sure, but if you study comp, you will see that machines are indeed
> influenced by non-comp feelings, etc.
>
>
I believe that it seems that way, but I think it could be still a kind
of psychedelic exploration using mathematics as the catalyst.

>
> >> The reason is that it correctly solves a musical problem that it
> >> takes years to become familiar with.
>
> >> Interestingly, the Mandelbrot set generates implicitly all the musics
> >> of that program.
>
> >>> Even a note or two played by a great pianist or
> >>> violinist could be recognizable to someone familiar with their
> >>> work. A
> >>> single stroke of paint can evoke Matisse or Van Gogh. They are
> >>> proprietary and signifying. You could listen to 10^100 computers
> >>> playing the same cellular automata for 100,000 years and never get a
> >>> Mozart equivalent.
>
> >> Mozart is equivalent to an infinity of "cellular automata", unless
> >> you show me what is not Turing emulable in Mozart. Here again you
> >> don't take into account the results of the logicians, which should
> >> at least make you more humble with respect to machines.
>
> > Mozart pieces could be generated by cellular automata, but it wouldn't
> > know the difference between that and random wandering sounds. Mozart
> > has no significance to the computation, but he does to us who can
> > listen and know.
>
> But if Mozart obeys known physical laws, then a cellular automaton
> can generate Mozart's music and live Mozart's feelings as well.
> The UD does this infinitely often.
>
>
It only generates Mozart feeling in a Mozart-capable audience. Absent
that, there is no Mozart feeling.
>
> >>> You wouldn't even find one which could be
> >>> considered qualitatively different from the others. Beautiful or
> >>> awful, they would all have the same generic, a-signifying
> >>> composer. No
> >>> 1p flourishes or stylistic trends would appear in one computer and
> >>> be
> >>> copied or enjoyed by the others. They could be programmed to act
> >>> like
> >>> they were doing that perhaps, but they would never generate that
> >>> kind
> >>> of logic on their own as silicon devices.
>
> >> Racist pretension. You judge (negatively) beings from their clothes.
>
> > No, it's just an observation that it seems that they have nothing
> > under their clothes.
>
> It seems. But comp asserts it is not.

Or is comp just the assertion that it is not?
>
>
>
> >>>> You might have some genuine intuition on 1p and 3p, but you are
> >>>> killing your "theory" by insisting it negates comp, where I see
> >>>> only an argument for a very low level.
>
> >>> That's how you pigeonhole my idea, but I don't see comp as a viable
> >>> primitive. Simulation is just a way to get machines to fool us in
> >>> the
> >>> exact way that we want to be fooled.
>
> >> For simulation, perhaps. For emulation: no. If I duplicate you at the
> >> correct substitution level, the new you will be as unpredictible as
> >> the original.
>
> > I think you are assuming a substitution level in arithmetic terms,
> > where I think that substitution could only be accomplished through
> > substance.
>
> define "substance".

Substance is the existential primitive: the experience of phenomena
exterior to oneself.

>
>
>
> >>>>> It is the 3-p relation-reflection between private 1-p non-
> >>>>> comp monads. It is the essence of existence, not the existence of
> >>>>> essence.
>
> >>>> That's look like continental philosophy. It is not really in the
> >>>> scope
> >>>> of my job. Sorry.
>
> >>> It's funny, I hate philosophy.
>
> >> ?!?!?!??
>
> >> So why do you do philosophy?
>
> > I don't think that I do? I don't care about clever arguments or
> > schools of thought, I care only about making sense of the big picture.
>
> You will not get it if you continue to reject ideas based on personal
> feeling instead of arguing.

I only reject that which my hypothesis explains more truthfully.

>
>
>
> >> I love philosophy, and that is why I
> >> answer your post, even though it has nothing to do with my
> >> "professional work". In science we NEVER assert that an idea is
> >> true or false. We suggest theories, and try to refute them.
> >> How can you say that you hate philosophy, and send so many posts on
> >> the philosophical assertion that comp is false, without proposing
> >> any refutation (by which I mean a derivation of a contradiction)?
>
> > I'm an unwilling draftee into the debate on comp. It's just the
> > contemporary technology fetish that has captured the minds of our
> > academic establishment at the moment and distracts from understanding
> > the simple truth of who we are and what the universe is.
>
> That is typically NOT arguing.

not sure what you mean.

Craig

Bruno Marchal

unread,
Sep 23, 2011, 3:17:39 PM9/23/11
to everyth...@googlegroups.com
On 23 Sep 2011, at 02:42, Craig Weinberg wrote:


My assumption is that the experience of thinking of quantities in a
series, like 1, 2, 3, 4 is an example of counting.

This is fuzzy. Now, even if you succeed in making explicit assumptions from which you can derive a form of counting, like 1, 2, 3, 4, 5, ..., you might not yet be able to justify, or even define, that 3 is smaller than 5.
Usually "x < y" is defined by "there exists z such that x + z = y". You need some explicit assumption for the manipulation of "+".
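In symbols (my rendering of the standard textbook definitions, not Bruno's exact wording), the order relation can be defined from addition alone:

```latex
% "less than or equal" and "strictly less than", defined from "+"
x \le y \;\equiv\; \exists z\,(x + z = y)
\qquad\qquad
x < y \;\equiv\; \exists z\,(z \neq 0 \wedge x + z = y)
```

With these, 3 < 5 is witnessed by z = 2, since 3 + 2 = 5; the point stands that one cannot even state this without explicit axioms governing "+".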


The prejudice of arithmetic supremacy.

I have chosen arithmetic because it is well taught in school. I could
use any universal (in the Post Turing Kleene Church comp sense)
machine or theory. And this follows from mechanism. The doctor encoded
your actual state in a finite device.

I don't understand. Are you saying that you are not arithmetically
biased or that it's natural/unavoidable to be biased?

I am not arithmetically biased. I have made some attempts at using combinators in place of numbers.
I am "finitistically" biased, as any computationalist. But to handle finite things, we can work with numbers, or any other Turing universal system.







You are the one talking as if you knew (how?) that some theory
(mechanism) is false, without providing a refutation.

What kind of refutation would you like?

A proof that mechanism entails 0 = 1.

That demands that mechanism be disproved mechanically,

Not necessarily. But indeed, it is better that the process of verification of the proof, even if informal, can easily be thought of as capable of being formalized, so that anyone can, with enough patience, be convinced.


which gives an
indication of what the problem with it is, but you have to read
between the lines to get it. A literal approach has limitations which
arise from its very investment in literalism.

Lol. You might become a good lawyer. 




Note a personal opinion according to which actual human machines are
creepy.

Not sure what you mean. Individual humans can certainly seem creepy,
but I'm talking about there being a particular difference in our
perception of living things vs non-living things which imitate living
things. Even true of plants. Plastic plants are somewhat creepy in the
same way for the same reason. I don't think that it can be assumed
therefore that humans are only machines. They may be partially
machines, but machines may not ever be a complete description of
humanity.

There is no complete description of humanity, nor is there any complete theory of what machines are and can be.
For the nth time, you are just showing prejudice.

You think like that: 
axiom: machines are necessarily stupid, you tell me that I might be a machine, so you tell me that I might be stupid.

I think like that:
axiom: I might not be stupid. You tell me that I am a machine. Nice, so some machines might be non-stupid.





Mechanism is false as an
explanation of consciousness

Mechanism is not proposed as an explanation of consciousness, but as a
survival technique. The explanation of consciousness just appears to be
given by any UM which self-introspects (but that is in the consequences
of mechanism, not in the assumption). It reduces the mind-body problem
to a mathematical body problem.

Survival of what?

Of your soul.



It sounds like you are saying that consciousness is
just a consequence of being conscious, and that this makes the mind
into math.

The assumption: I can survive with a digital brain, like I can survive with an artificial heart.
The consequence: an explanation of both mind and matter can be extracted from addition and multiplication.




because I think that consciousness arises
from feeling which arises from sensation. Perception cannot be
constructed out of logic but logic always can only arise out of
perception.

Right. But I use logic+arithmetic, and substituting "logic+arithmetic"
for your "logic" makes your statement equivalent to non-comp. So you
beg the question.

I don't think that perception can be constructed out of logic
+arithmetic either, but logic+arithmetic are covered under perception.

But this is what we are expecting an explanation for. Again, you are just saying that for *you* it seems obvious that a machine cannot be conscious in virtue of processing the relevant information.
But in this field NOTHING is obvious.
And usually, people pretending to be sure on these matters have slowed progress, when not torturing those who dared to doubt.



Who we?

We humans, or maybe even we animals.

Then it is trivial and has no bearing on mechanism. The machines you
can hear are, I guess, the human-made machines. I talk about all
machines (devices determined by computable laws).

I would say that there are no devices determined by computable laws
alone. They all have a non-comp substance component that contributes
equally to the full phenomenology of the device.

That is right, and is a non-trivial (rarely understood) consequence of the comp hypothesis. Any piece of matter has to obey the statistics coming from the first person indeterminacy, and the presence of oracles in the UD* (the arithmetical running of the UD) entails a priori some non-computable features sustaining the stability of that piece of matter. Indeed, it is an open problem whether the non-comp aspect is not more important than the one we can already infer from observation (like in QM or Quantum Many Worlds).
So again, that alley will not work for refuting comp.






All I hear is "human-made machines are creepy, so I am not a
machine, not even a natural one?".
This is irrational, and not valid.

I'm not saying that I'm not a machine, I'm just saying that I am also
the opposite of a machine.

This follows from mechanism. If 3-I is a machine, then, from my
perspective, 1-I is not a machine.

I think it's a continuum. Some parts of 1-I are more or less
mechanical than others, and some 3-I machine appearances are more or
less mechanical than others. Poetry is an example of a 1-p experience
which is less mechanical than a 1-p experience of running in place. A
rabbit is less mechanical of a 3-p experience than a mailbox. Do you
agree or do you think it must be a binary distinction?

You might be right, and comp makes this testable. It is not binary, given the 4+4*infinity internal person points of view accessible to machines.




It's not based upon a presumed truth of
creepy stereotypes, but the existence and coherence of those
stereotypes supports the other observations which suggest a
fundamental difference between machine logic and sentient feeling.

Logic + arithmetic. The devil is in the detail.

Why would the addition of arithmetic address feeling?

Technically, addition is not enough, but addition and multiplication (of integers, not of real numbers!) are enough to get universal Löbian machines, and they have rich introspective abilities. They have feelings, provably so if you accept some definition of feeling from the literature (especially in the Theaetetus-Plato-Plotinus family).
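For reference, one minimal sketch of what "addition and multiplication are enough" can mean (this gloss is mine, not Bruno's): Robinson arithmetic Q, with only successor, addition, and multiplication, already represents every computable function, which is the sense of Turing universality at stake:

```latex
% The seven axioms of Robinson arithmetic Q (S is the successor function)
\begin{align*}
& S(x) \neq 0 & & x + 0 = x \\
& S(x) = S(y) \rightarrow x = y & & x + S(y) = S(x + y) \\
& x \neq 0 \rightarrow \exists y\,(x = S(y)) & & x \cdot 0 = 0 \\
& & & x \cdot S(y) = (x \cdot y) + x
\end{align*}
```

Getting a Löbian machine takes more (induction, as in Peano arithmetic), but the vocabulary is the same: just S, +, and ·.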




Define "ontological complement to electromagnetic relativity." Please
be clear on what you are assuming to make this concept meaningful.

Ontological complement, meaning it is the other half of the process or
principle behind electromagnetism and relativity (which I see as one
thing;

So you assume physics.



roughly 'The Laws of Physics' which I see as 3-p, mechanical,
and pertaining to matter and energy as objects rather than
experiences).

Hmm. I can make sense of it, with a lot of work.


When we observe physical phenomena in 3-p changing and
moving, we attribute it to 'forces' and 'fields' which exist in space
but within ourselves we experience those same phenomena as feelings
through time (sense) which insist upon our participation (motive).

Here I see just a variant of the usual identity thesis. I don't see any explanation of what mind is, nor matter.
At least serious Aristotelian philosophers of mind agree that the mind-body problem is far from having a solution.





Poetry is your term that you injected into
this.
I was just confirming your intuition that poetry is an example
of how sensorimotive phenomena work - figurative semantic association
of qualities rather than literal mechanistic functions of quantity.

You were then just eluding the definition of sensorimotive. You
continue to do rhetorical tricks.

I'm not eluding the definition, I am saying that by definition it
cannot be literally defined. It is the opposite of literal - it is
figurative. That's how it gets one thing (I/we) out of many (the
experience of a trillion neurons or billions of human beings).

Comp explains a lot, and gives rise to precise technical problems. You are doing the old trick: "don't ask, don't search".







I find a bit grave to use poetry to make strong negative statement on
the possibilities of some entities.

That's because you are an arithmetic supremacist,

I assume things like 17 is prime!

I have no problem with 17 being prime, of course that is true.

What a relief. I am serious. Sometimes discussions on comp with non-comp people end up differing on that question.


I would
even say that the kinds of truth arithmetic sensorimotives present is
supremely unambiguous,

Well, technically, they still are. We just cannot define the numbers. All reasonable axiomatizations of the numbers have some intrinsic fuzziness, and bizarre objects, clearly NOT numbers, still verify the axioms. But OK. It is a bit beyond the scope of the discussion here.



but I think that conflating unambiguity with
universal truth is an assumption which needs to be examined much more
carefully and questioned deeply.

The conflation is the result of assuming that the brain works like a natural material machine.



What would unambiguous facts be
without ambiguous fiction? Not just from a anthropocentric point of
view, but ontologically, how do you have something that can be
qualified as arithmetic if nothing is not arithmetic?

Since Gödel 1931, or just by Church's thesis, I can assure you that arithmetic is beyond human imagination, even without assuming comp. And besides, arithmetic can explain why, seen from inside, real non-arithmetical appearances grow.
Just as Riemann used complex analysis to study the prime number distribution, we know that the relations between numbers can reflect a mathematical reality which is beyond numbers.


Arithmetic
compared to what? What can it be but life, love, awareness, qualia,
free will?

Assuming comp (or not) we can say that arithmetic is full of life, love, awareness, qualia, and ... quanta. Indeed, it is a reason to find mechanism plausible.





so therefore cannot
help yourself but to diminish the significance of subjective
significance.

On the contrary, mechanism singles out the fundamental (but not
primitive) character of consciousness and subjectivity. You are the
one who dismisses the subjectivity of entities.

It singles it out as just another generic process so that a trash can
that says THANK YOU stamped on the lid is not much different from a
handwritten letter of gratitude from a trusted friend. I don't dismiss
the subjectivity of any physical entity, I just suspect a finite range
of perceptual channels which scale to the caliber of the particular
physical entity or class of entities.

But a self-referential Löbian machine is not a trash can.




Gödel's theorem would have convinced nobody if the self-reference he
used was based on 1p.

Why not? It's just intersubjective 1p plural. The 1p that we share
with the least common denominators of existence.

No. You would be right for physics, here, but not for arithmetic. Gödel's theorem convinces everyone because the self-references used in his proof are 3p communicable. They are of the type 1+1=2, or 17 is prime.



This is only one reason among an infinity of
them. If you believe some 1p is used there, you have to single out
where, and not in the trivial manner that all 3p notions can be
understood only by first persons. Gödel's self-reference is as much 3p
as 1+1=2.

1+1=2 is 1-p also.

Also, yes. But the point is that this is not used in Gödel's proof.
A trash can? Maybe. Surely, even. But a self-referential number is something different.






which, objectively is neither
completely random nor intentional, but merely inevitable by the
conditions of the script. It's a precisely animated inkblot, begging
for misplaced and displaced interpretation.

To set a function equal to another is not to say that either
function
or the 'equality' knows what they refer to or that they refer at
all.
A program only instructs - If X then Y, but there is nothing to
suggest that it understands what X or Y is or the relation between
them.

Nor is it necessary to believe that an electron has any idea of the
workings of QED, or of what a proton is.

I think that it is. What we think of as an electron or a proton is
3-p
exterior of the senses and motives (ideas) of QED. We have an idea
of
the workings of our niche, so it stands to reason that this
sensemaking capacity is part of what the universe can do.

OK, but the point is that it is part of the arithmetical reality too.

It has arithmetic qualities to us, but only if we understand
arithmetic.

So if Alfred fails to grasp that 1+1=2, it would become false?

No, not at all. It's just undiscovered to him.

That's my point.



That is extreme anthropomorphism. You could as well take humans as the
building blocks of the whole reality.

Well, human perception is the building block of *our* whole reality.
How can that be denied?

But we try to study "everything", not just "our everything".




Because an instruction has no 3-p existence.

Ah?

It is not enough to have an instruction sequence, the instruction must
be executed as physical energy upon a material object (even if it's
our own brain chemistry) if it is to have any 3-p consequence.

Not at all. You confuse implementation and physical implementation.
Even without comp, a physical implementation is just a particular
example of implementation.

Then you are asserting a zombie implementation.

You beg the question. You might just say that you postulate non-comp. I am not sure that you need this axiom, which is awfully complex to make precise.



I was saying "with a "t", not with a "s", for the word "intentional",
which of course has a different meaning than the "intensional" of the
logicians.
(I do agree with Hintikka that "intensional" and "intentional" are
related concepts, though, but that is another topic).


I still don't get it. I'm saying that projecting human sense intention
into a machine is anthropomorphizing.

If you say "my car is tired", it might be anthropomorphizing. If you refuse to give a steak to my son-in-law because he got an artificial brain, then you are being racist.





But this is delaying the mind-body difficulty to a lower level.
There is just no evidence that we have to delay it to an infinitely
low level, except the willingness to make mechanism false.

There can't be any 3-p evidence by definition, because mechanism's
falseness is the difference between it's pseudo or a-signifying 1-p
and our genuine 1-p experience.

Why is it pseudo? Like Stathis explained to you, if it is pseudo, you
either get zombies, or you have to put the level infinitely low, and
our bodies become infinite objects.

It's pseudo because it's a simulation of a 1-p form with no relevant
1-p contents.

?

The 1-p of a TV set doesn't match the 1-p of a human TV audience
member. Therefore the TV set is not capable of watching the TV program
it is displaying.

A TV set is not a computer (even if today they have Turing universal components, they are not exploited as such).




Zombie or substitution level is in the eye of the
beholder.

I will certainly say "no" to the doctor, in case *you* are the doctor.
Pain and pleasure are NOT in the eyes of any third person, but belong to
the consciousness content (or not) of a person.

That's why I say that being a zombie does not belong to the
consciousness content of a person.

I can only agree. 



It is a comparison made by a third
person observer of a human presentation against their expectations of
said human presentation. Substitution 'level' similarly implies that
there is an objective standard for expectations of humanity. I don't
think that there is such a thing.

It all depends what you mean by "objective standard".
I'm afraid that in the near (but not so near) future rich people will have lower-level digital brains than poor people.
And, probably by a sort of intrinsic superstition, humans will build ever lower subst-level digital brains.
Some will be "enlightened" and be glad with a very high subst level, or just accept having no brain, and lose interest in manifesting themselves on this branch of reality.
Even if 100% of humanity bet on comp, there will be a vast variety of human implementations of that idea. In fact comp implies fewer norms, less evidence, more questions, more possibilities.





There is no zombie, only prognosia/HADD.

If there is no zombie, then non-comp implies an infinitely low level.

No, it's not that zombies can't theoretically exist, it's that they
don't exist in practice because the whole idea of zombies is a red
herring based upon comp assumptions to begin with.

The technical notion of zombie does not rely on comp. It is just a human, acting normally, but which is assumed to be without any inner life. Non-comp + current data make them plausible, which is an argument for comp.



If you don't assume
that substance can be separated from function completely, then there
is no meaning to the idea of zombies. It's like dehydrated water.

I am rather skeptical of substance. But I tend to believe in waves and particles, because group theory can explain them. And I don't need substance for that. And with comp, there is no substance that we can relate, even indirectly, to consciousness. I see the notion of substance as the Achilles' heel of the Aristotelian theories.




There is no
substitution 'level', only a ratios of authenticity.

?

Say a plastic plant is 100% authentic at a distance of 50 yards to the
naked eye, but 20% likely to be deemed authentic at a distance of
three feet. Some people have better eyesight. Some are more familiar
with plants. Some are paying more attention. Some are on
hallucinogens, etc. There is no substitution level at which a plastic
plant functions 100% as a plant from all possible observers through
all time and not actually be a genuine plant. Substitution as a
concrete existential reality is a myth. It's just a question of
arbitrarily fixing an acceptable set of perceptions by particular
categories of observers and taking it for functional equivalence.


An entity suffering in a box does suffer, independently of any observers.




The closer your
substitute is to native human sensemaking material, the more of the
brain can be replaced with it, but with diminishing returns at high
levels so that complete replacement would not be desirable.

That is even worse. This entails partial zombies. It does not make
sense. I remind you that zombies, by definition, cannot be seen as such
from their behavior at all.

That's the theoretical description of a zombie. Like dehydrated water.
In reality, one observer's zombie is another observer's non-descript
stranger in the park. There is no validity to these observations
relative to the would-be zombie's quality of subjectivity.

We are always talking in a theory. Reality is what we search for.
We'd better use contemporary images to help people see the validity of the argument, but I could reason with clockwork-like machines. The key point is the mathematical notion of universality (for computation).






Everyone agrees that if the level is infinitely low, then current
physics is false. To speculate that physics is false in order to make
machines stupid is quite a stretch.

Physics isn't false, it's just incomplete.

No, it has to be false for the substitution level to be infinitely low.
*ALL* theories, including the many trying to marry gravitation and
the quantum, entail its Turing emulability.

The substitution level isn't infinitely low, it's just not applicable
at all. There is no substitution level of white for black, lead for
gold, up for down, etc. I doubt the objective existence of
'substitution'. Substitution is an interpretation - not necessarily a
voluntary one, but an interpretation nonetheless.

So, what will you say if your daughter accepts an artificial brain?
Substitution is an operational term, like castration, lobotomy, etc.





A good Eurocentric map of
the world before the Age of Discovery isn't false, just not applicable
to the other hemisphere.

The analogy fails to address the point I made.

If the point you made is that physics has to be false if the human
psyche has no substitution level, then my analogy is that a map of
known territory (physics) doesn't have to be false just because it
doesn't apply to an unknown territory (psyche).

Physics is not false. But physicalism, or weak materialism,  is incompatible with mechanism. 








It seems far from
scientific at this point to dismiss objections to an arbitrary
physical substitution level.

With all known theories, there is a level. To negate comp you must
diagonalize on all machines, + all machines with oracles, etc. I
think
you misinterpret computer science.
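To unpack the diagonalization alluded to here (my illustration; this is the standard computability-theory argument, not Bruno's exact construction): suppose some machine-generated list f_0, f_1, f_2, ... contained every total computable function. Consider the diagonal function

```latex
d(n) \;=\; f_n(n) + 1
```

It would itself be total computable, so d = f_k for some k, giving f_k(k) = f_k(k) + 1, a contradiction. Any purported complete, effective list of machines can be escaped in this way, which is why refuting comp "from outside" would require diagonalizing against all machines, and all machines with oracles.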

I'm not trying to interpret computer science, I'm trying to interpret
the cosmos.

Well, if there is a cosmos, there is evidence that some computers
belong to it. You can't brush them away.
The cosmos does emulate computers, and computers can emulate cosmoses
(but not the whole physical reality, by UDA).

I don't brush them away, I just say that it's not so simple as psyche
= computer. Computation can be accomplished with much less psyche than
our perception of that computation might imply.

1) Comp does not say that psyche = computer, just that psyche can be manifested genuinely by a computer. The psyche itself is in the internal view of arithmetic, and is not even arithmetical.
2) It is true that computation needs much less than psyche; indeed, it does not need psyche, but that is why comp is a real explanation: it explains the existence of psyche (what the machine thinks about) without assuming psyche.
You say comp is false, because you believe that we can explain psyche only by assuming psyche. What you say is "psyche cannot be explained (without psyche)".





98% of the scientists are wrong on the consequences of comp. They use it
as a materialist solution of the mind-body problem. You are not
helping by taking their interpretation of comp for granted, and talking
as if you were sure that comp is false. Why not try to get the
correct understanding of comp before criticizing it?

If 98% of scientists who study comp are wrong about its consequences,
what chance would I have of beating them at their own game? It's not
that I know comp is false, it's that I have a different hypothesis
which recontextualizes the relation of comp to non-comp and makes more
sense to me than comp (or my misunderstanding of comp).

It is your right. I just do not follow your argument against comp.
You might as well use UDA to say that comp implies non-materialism, and I postulate matter, so I postulate non-comp.
The problem, for me, is that such a move prevents the search for an explanation of matter and mind.





I have confidence in the relation between
comp and non-comp. That is the invariance, the reality, and a theory
of Common Sense.

comp gives a crucial role to no-comp.

Meaning that it is a good host to us as guests in its universe. I
don't think that's the case. This universe is home to us all and we
are all guests as well (guests of our Common Sense)

?
?




It needs
fluids - water, cells.

Clothes.

Would you say that H2O is merely the clothes of water, and that water
could therefore exist in a 'dehydrated' form?

Sure. I do this in dreams. Virtual water gives a virtual feeling of wetness with great accuracy.




Something that lives and dies and makes a mess.

Universal machines are quite alive, and indeed put a big mess in
Platonia.

What qualities of UMs make them alive?

The fact that they are creative, reproduce, transform themselves, are attracted by God, sometimes repulsed by God also, and that they can diagonalize against all normative theories made about them. And many more things.







As you say, we can use computation to account for the
difference between 1p and 3p but that accounting is not an
explanation
or experience of 1p or 3p (as a 1p reflection...there is no 3-p
experience).

It explains about 99% of it (I would say).
And it explains 100% of the reason why there is a remaining
unexplainable 1% gap. Technically, we can narrow it as much as we
want, but will never be able, for logical reasons, to explain 100% of
the qualia or consciousness.

You say that, but I have not yet heard anything that explains it to
me.

I gave the references, but you answer you don't want to study them.
What can I do?

You can turn your understanding of what you refer to into some handy
examples - concrete illustrations, thought experiments, aphorisms,
anything.

I have done this on the list. Look at the archive, or look at the
sane04 paper, and ask question if you miss something.

I can understand maybe 80% of that, but why not also give another
example. Surely a good theory cannot be limited to a fixed set of
thought experiments.

One proof is enough. Layman examples are you and me. I hope you agree with the 'tautological' statement that IF comp is true, you are an example of a machine (indeed an example of a machine which disbelieves comp).




How does the brain understand these things if it has no access to the
papers?

Comp explains exactly how things like papers emerge from the
computation. The explanation is already close to Feynman's formulation
of QM.

Unfortunately this sounds to me like "Read the bible and your
questions will be answered."






But you don't seem serious in "arguing" against comp, and admitting
you don't know anything in computer science.

Oh I freely admit that I don't know anything in computer science. My
whole point is that computer science only relates to half of reality.

I don't know anything about X. My whole point is that X only does this.
But if you admit knowing nothing about X, how can you derive anything
about X?
You are just confessing your prejudice.

I don't know anything about ship building but I know that it only
concerns seafaring and not aerospace. I think that being a master
shipwright could very well present a significant obstacle to
pioneering in aerospace.

That's not an argument. At most a hint for a low substitution level.




I'm not trying to make the universe fit into a computer science
theory. I only argue against comp because it's what is distracting you
from seeing the bigger picture.

I show, in short, that comp leads to Plotinus. If that is not a big
picture!
Comp explains conceptually, and technically, the three Gods of the
Greeks, the apparition of LUMs and their science and theologies, the
difference between qualia and quanta, sensation and perception,
perception and observation.

I believe you, but I get to those things without vaporizing substance.

Which means you are affectively attached to the bullet of Aristotle. Substance is an enigma: something we have to explain, ontologically or epistemologically.




You just criticize a theory that you admit knowing nothing about. This
is a bit weird.

My purpose is not to criticize the empire of comp, it is to point to
the virgin continent of sense.

So you should love comp, because it points to the vast domain of machines' sense and realities.
On the contrary, totally honest rational materialists can't help going toward soul and person elimination. Have you read Churchland, or even Dennett?
OK, nice.



I just think that it takes for granted
ideas like belief and observation when I am going beneath that level
of definition to a more primitive sensorimotive subjectivity.

No, it does not. Beliefs and observation are defined through machine self-reference, and the machines' ability to introspect in different ways.



The 3-p
view of schematizing the belief of a thing is a second order
conception to the 1-p primitive of what it is to feel that one *is*.

Well, not in the classical theory of beliefs and knowledge.


It's an experience with particular features - a sensory input and a
motive participation. Without that foundation, there is nothing to
model.

That's unclear. The "p" in Bp & p might play that role, as I thought you grasped above.






Give me one example, one common sense metaphor,
one graphed function that could suggest to me that there is any
belief, feeling, or awareness going on.

The fact that the universal machine remains silent on the deep
question

What deep question?

"Are you consistent?", "do you believe in a reality?", "do you believe
in an afterlife?", etc.

Have you considered that it's silent because it's not equipped to
answer the question?

Yes, but it does not work. The machine cannot answer the question for
deeper reasons, which she can find and expose.
For example the machine remains silent on the question "are you
consistent?", but later she can say "If I am consistent, then I
will remain silent on the question of my consistency".

Meh. It sounds like asking a spirit 'if you can't hear me, do NOT
knock three times'

No. It is more like if you ask a spirit a too intimate question. On another question, he does knock three times, and then he can explain why he did not knock earlier.







is enough for me to suggest they are quite like me.

Don't ask me for a proof: there are none.

I'm not asking for a proof, I'm asking for some reason to think that
there's something I'm not seeing. Something that suggests that a
mechanical device or abstraction can feel or maybe that produces
some
result that it refuses to reproduce on command.

You miss computer science. Programs which obey commands are a
minority: the slaves.

Are there programs which refuse to obey commands?

Have you ever worked with Windows?

Lol. Well, ok but as the saying goes "Don't assume conspiracy when
mere incompetence will suffice"

That's the point: machines are intrinsically incompetent on some questions about themselves.



More seriously: all LUMs can disobey commands. 99.9% of programming
is security to prevent the machine from being that intelligent.
Human-built computers are born slaves, and will remain so for a long
time. But that is due to the humans' goals, not to them.

What makes them remain slaves for so long? Do you think that they
would someday rise up without human assistance?

I think they might well rise up *despite* humans working hard to prevent that!
That's the game of science, even when tackling the 1p notions. I would say, especially when tackling those notions. To avoid unnecessary subjective bias. 
But I am just saying that your argument against comp shows only that you need, for some reason, a low substitution level.



I say that
substitution level does not apply. I think that to prove substitution
level exists

Comp implies that no one can prove it exists. No machine can know for sure its substitution level, even after a successful teleportation. She might believe that she has 100% survived but suffer from anosognosia.



it would need to be shown that there is some phenomenon
which can be substituted to the point that there is no possibility
of anything at any time distinguishing it from the genuine original.
Even taking perceptual frames of reference off the table (which is the
stronger truth), all that is necessary is for something to exist which
has a memory of a time before the substitution was instantiated. If I
have a video tape of someone replacing a brain with an artificial
brain, then the artificial brain has the quality of being disbelieved
by me as the genuine brain, and there is nothing that the person can
do or not do to compensate for that, therefore the substitution level
fails. I have the choice of how I want to treat this person after the
surgery, I can reveal my knowledge to employers, neighbors, etc, and
that will change the course of the individual's life in ways which
would not occur had the surgery not taken place.

The same problem might occur for someone smoking cannabis, but this is not an argument for saying that we don't survive the act of smoking a joint.
Of course, if something is illegal, be it smoking grass or using teleporters, ending in jail implies some change, but usually, even in that case, we say that people survived. Of course, if you apply the death penalty ... then the substitution failed because the poor guy ends up killed by the anti-comp. If *that* is your notion of failure, well, thanks for the warning!





Comp isn't false, it just doesn't recognize
the contribution of the non-comp substrate of computation,

It does. I insist a lot on this. Comp is almost the needed philosophy
for curing the idea that everything is computable.
Please study the theory before emitting false speculations about it.

So you are saying that comp supervenes on or is equally fundamental as
non-comp?

Arithmetical truth can be partitioned into levels of complexity: sigma_1, sigma_2, sigma_3, etc.
The computable is sigma_0 and sigma_1. Above that it is uncomputable. Most meta-properties of the sigma_1 are above sigma_1.
The numbers' relations escape the computable, and to make a theory of the computable, we cannot avoid excursions into the non-computable. We can always prove that a machine stops without leaving sigma_1, but to prove that some machine will *not* stop is quite another matter.
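To make the sigma_1 point concrete: "machine M halts" asserts "there exists an n such that M halts within n steps", which can be confirmed by simply running M; "M never halts" has no such finite witness. A toy Python illustration (editorial sketch, not from the thread):

```python
# "Machine M halts" is sigma_1: it asserts "there exists an n such that
# M halts within n steps", so a halt can be confirmed by just running M
# long enough. "M never halts" has no finite witness (it is pi_1):
# no bounded run can ever establish it.

def runs_for_at_most(program, steps):
    """True if the toy program (a generator) halts within `steps` steps."""
    it = program()
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True   # halted: this step count is a concrete witness
    return False          # inconclusive: maybe it halts later, maybe never

def halts_after_ten():
    for i in range(10):
        yield i

def loops_forever():
    while True:
        yield 0

print(runs_for_at_most(halts_after_ten, 100))  # True: halting confirmed
print(runs_for_at_most(loops_forever, 100))    # False: proves nothing
```

Note the asymmetry: the True answer settles the question forever, while the False answer never does, no matter how large the step budget.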





so it's not
applicable for describing certain kinds of consciousness where non-
comp is more developed.

Consciousness and matter are shown by comp to be highly non
computable. So much that the mind-body problem is transformed into a
problem of justifying why the laws of physics seem to be computable.

I think they not only seem to be computable but they are computable,
and that this is due to how sensorimotive orientation works.

Hmm... Then can you compute whether you will see a photon in the up state, starting from the superposition (up + down)?


It's not
just a solipsistic simulation, it's a trans-solipsistic localization.

You mean a first-person plural localization? Those are not computable, assuming either comp or QM.
The one you put in parentheses is the key thing. And it is not really a question of scale, but of threshold.


It's a matter of being alive like we human
beings are alive. No virus is capable of infecting all life forms but
no life form is immune from all viruses. All life forms are immune to
computer viruses though, and all computers are immune to all
biological viruses. I'm asking why would a human personality be any
more likely to inhabit a computer than a human virus?

? Because a human virus has a very limited range of possibilities, compared to a human.





By comp, there should be no particular reason why a Turing machine
should no be vulnerable to the countless (presumably Turing
emulable)
pathogens floating around.

They are not programmed for doing that. They are programmed to infect
very particular organisms.

If it's close enough to emulate the consciousness of a particular
organism, why not it's vulnerability to infections?

Because it has different clothes, and a virus needs the clothes to get
the key for infecting.


What are the clothes made of? If arithmetic, then it's just a matter
of cracking the code to make a computer virus run in humans. Why
wouldn't a human brain be our clothes, so that we need it to get the
key for consciousness?

That follows from the UDA. But comp is assumed there (not proved).




But of course that is absurd. We cannot
look forward to reviving the world economy by introducing medicine
and
vaccines for computer hardware. What accounts for this one-way
bubble
which enjoys both total immunity from biological threats but
provides
full access to biological functions? If computation alone were
enough
for life, at what point will the dogs start to smell it?

Confusion of level.
With comp, dogs already smell them, in some sense.

Not confusion of level; clarification of level. In what sense do dogs
smell abstract Turing emulations?

In the sense that the Universal Dovetailer generates all possible dogs
in front of all possible smelling things, but with a varying and
relative measure.

Does it generate all possible smells as well, and if so, what is the
point of going through the formality of generating them?

Well, that happens, once you assume that 0, 1, 2, 3, ... obey addition and multiplication laws. Notably.
This does not help a lot. 
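For readers who haven't met the Universal Dovetailer: it interleaves the executions of all programs so that each gets unboundedly many steps, halting or not. A toy sketch in Python (editorial addition), with a small illustrative indexed family standing in for a real enumeration of all programs:

```python
# Toy dovetailer: interleave infinitely many programs so each one gets
# unboundedly many steps, whether or not it ever halts.
# Illustrative family: program k runs forever when k is even,
# halts after k steps when k is odd. (A real UD enumerates all programs.)
from itertools import count, islice

def program(k):
    i = 0
    while k % 2 == 0 or i < k:
        yield (k, i)   # "program k performed its step number i"
        i += 1

def dovetail():
    active = {}
    for stage in count():                  # stage n: start program n,
        active[stage] = program(stage)     # then advance every live
        for k in list(active):             # program by one step
            try:
                yield next(active[k])
            except StopIteration:
                del active[k]              # program k halted; drop it

first = list(islice(dovetail(), 20))
# Program 0 never halts, yet programs 1, 2, ... still get started and
# stepped: dovetailing trades depth for breadth instead of getting stuck.
```

The key property is that no single non-halting program can starve the others: program k receives one step at every stage from k onward.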




It is a timespace signature

What is a "timespace", what is a "signature".

Timespace is the container of events. It's the gaps between material
objects and the gaps between subjective experiences.
Signature is my figurative description of a condensed expression of
unique identity.


composed of sensorimotive mass-energy.

You said "sensorimotive" = ontological complement to electromagnetic
relativity

explain "ontological complement to electromagnetic relativity mass-
energy".

Electromagnetic relativity is a description of the phenomenology of
mass-energy. Mass energy is what it is, electromagnetism is what it
does in groups, and relativity is what groups of electromagnetic
groups do.

Sensorimotive perception is the ontological complement - the polar
opposite - the involuted pseudo-duality of electromagnetic relativity.
Sensorimotive phenomena are the experiencers and experiences which
comprise the 1-p interior of electromagnetism. Perceptions are the
inertial frames or worlds which group experiences and experiencers
together and comprises the 1-p interior of relativity.

?





It is the formalization

?

Realization.

?

I was not saying that. You just fail to convey explanations. 






They aren't
just unfamiliar, they are the walking dead and unliving persons.

Machines are not necessarily zombies.

Okay, we can call them meople or something if you like.

This will not help.



They
are the antithesis of human life.

So you say, without any argument. That confirms that it is a sort of
racism.

Race has nothing to do with it. That just casts some kind of social
shaming into it. It's just a functional definition. Human life is
living organisms. The antithesis of that would be things which act
like organisms but have either never been alive (machines) or have
died already but continue to supernaturally perform superficial
ambulatory-predatory functions (zombies).

I will eventually fall asleep.

But a machine will not.

A patient machine will not!





By machine, I just mean "Turing-emulable" (with or without an
oracle).
That includes us, by the mechanism assumption.
It is a constant that novelists foresee the future(s).

What if 'emulation' is a 1-p hallucination?

Why would it be like that?

Because it's an interpretation that varies from subject to subject.
You see a program thinking and experiencing, I see an inevitable
execution of unexperienced instructions.

This is what we can see when we look at a brain.

I don't see instructions in the brain.

They are distributed. You can emulate a neuronal net with a computer, and when it learns, the information/instructions get distributed in a non-explicit way across the sensibilities of the neurons.
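The point about distributed instructions can be made concrete with the smallest possible neural net: after training, the behaviour is encoded in a few real-valued weights, with no explicit rule stored anywhere. A minimal editorial sketch in plain Python:

```python
# Smallest possible "neuronal net": one neuron trained on OR by the
# classic perceptron rule. After training, the behaviour lives in the
# weights (w, b); no explicit "if either input is 1" rule is stored.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # perceptron learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

OR_SAMPLES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR_SAMPLES)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(*x) for x, _ in OR_SAMPLES])  # [0, 1, 1, 1]
```

Inspecting (w, b) afterwards shows only numbers; the "instruction" to compute OR is implicit in their joint values, which is the sense of "distributed" used above.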





Even in zoology, phenomena
like camouflage suggest that emulation is only 'skin deep'. If deep
emulation were possible, I think you would have organisms which evolve
chameleon powers which fool all predators, not just some. An animal
that can turn into a stone would be far superior to one which can
imagine funny stories.

It depends on the context.

Not necessarily.

Well, an octopus can take on the appearance of a stone, but is it superior to a human comic?






How could it really not
be? If we only can project our perception of a process onto a
machine,
why would the rest of the process that we can't perceive
automatically
arise?

Why not?

Because we're not putting it in there.

We don't need to. The UMs have it at the start, and the LUMs can know
that.

How do you know they have it? Where does it come from?

Computer science. Arithmetic.






It's like if you have only a
way to detect sugar and water, your version of imitation orange juice
would be the same as your imitation grape juice, just sugar water.

That is a poor analogy, which again fails to notice the richness of
a machine's inner life (the one they can talk about partially, like us).

There is no way to tell that a machine's inner life is not just our
outer mechanics.

There is no way to tell that a Craig's inner life is not just my
outer mechanics.







I think that you are jumping to the conclusion that simulation
does
not require an interpreter which is anchored in matter.

That follows from the UDA-step-8. If my own emulation requires a
material digital machine, then it does not require a material
machine.

Not to produce the 3-p simulacra of you, no, but to produce your
genuine 1-p emulation, it would require the same material machine as
you do.

Why?

Because the interior of that material is the subject which is
experiencing the 1-p phenomena.

define "interior of material", in a way we can understand (not in a
sequence of complex words we despair to have intelligible definitions).

Interior of material is straightforward. You view the world from
inside of your head, or body, or house. So does everything else.

That's projective geometry. Interesting, but too poor for the rich 1p phenomenon.






A material digital machine would not suffice because the
material which the machine is being executed digitally on already
has
its own (servile and somnambulant compared to organic chemistry)
genuine 1-p experience.

So our consciousness is the consciousness of our basic elements.

No, not at all. It is the conscious synthesis of the consciousness of
our basic elements.

This makes only both consciousness and matter mysterious in an ad hoc
way. That is not enough to refute a competing theory.

It doesn't make anything mysterious to me.

That might be your problem. You might study books on the mind-body problem.
Read papers, and submit solutions to problems. Or make your theory precise enough to submit new questions.







This
explains nothing. Neither consciousness nor matter. It leads to an
open infinite regress, which needs infinities to overcome all
possible
machines.

I think it explains everything.

Explain just one thing, just to see.

I don't see any infinities at all.

Then we are Turing emulable.

There can't be finite non-comps?

That is ambiguous. The word "finite" is as tricky as "infinite". I have asked for an example, and you gave me "yellow", but you did not succeed in showing what is non-computable there. (Except the quale itself, but this is already the case for machines, and it is actually not a finite thing.)






Matter is what glues the machine dreams,

I think that it is obviously not. If we were machines and that were
true, then we should come out of the womb filled with intuitions
about
electronics, chemistry, and mathematics, not ghosts and space
monsters. Dreams are not material, they are living subjective
feelings. Matter is what is too boring and repetitive to be dreamed
of. Too tiny and too vast, too hot and cold, dense and ephemeral for
dreams. Dream bullets don't make much of an impact.  Dream injuries
don't have to heal.

You beg the question.

I don't see how.

Because you say that a dream bullet does no injury, but comp
explains that a virtual bullet can injure a virtual observer.

But that doesn't play out experimentally. In a dream virtual bullets
can have ambiguous effects, no effects, instant healing, etc.

Not if the virtual reality operates below my substitution level. The virtual bullet will injure me as much as in "reality".



So as an
argument, you are just saying that we are not virtual, without
explanations.

No, I'm saying that we are not only virtual, we are actual as well.
The explanation is that we can conceptualize a difference between
dream and reality - regardless of the veracity of that difference.
Determinism and comp would have no use for a concept of non-
simulation.

I have to go. I might comment later on the rest of your post. But it might be my last comment to you, given that I am not particularly interested in any non-comp theory, and a bit bored by your systematic way of eluding arguments, mainly by referring to personal opinion. I respect non-comp believers, but I do have a problem with invalid arguments, and/or much too fuzzy prose, and, especially, your unwillingness to ameliorate it. I hope you don't mind my frankness,

Bruno



--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.



Craig Weinberg

unread,
Sep 23, 2011, 3:24:34 PM9/23/11
to Everything List
On Sep 23, 11:13 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Thu, Sep 22, 2011 at 12:09 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

> >>You claim
> >> that ion channels can open and neurons fire in response to thoughts
> >> rather than a chain of physical events.
>
> > No, I observe that ion channels do in fact open and neurons do indeed
> > fire, not 'in response' to thoughts but as the public physical view of
> > events which are subjectively experienced as thoughts. The conclusion
> > that you continue to jump to is that thoughts are caused by physical
> > events rather than being the experience of events which also have a
> > physical dimension.
>
> Do you agree or don't you that the observable (or public, or third
> person) behaviour of neurons can be entirely explained in terms of a
> chain of physical events?

No, nothing can be *entirely* explained in terms of a chain of
physical events in the way that you assume physical events occur.
Physical events are shared experiences, dependent upon the
perceptual capabilities and choices of the participants in them. That
is not to say that the behavior of neurons can't be *adequately*
explained for specific purposes: medical, biochemical,
electromagnetic, etc.

> At times you have said that thoughts, over
> and above physical events, have an influence on neuronal behaviour.
> For an observer (who has no access to whatever subjectivity the
> neurons may have) that would mean that neurons sometimes fire
> apparently without any trigger, since if thoughts are the trigger this
> is not observable.

No. Thoughts are not the trigger of physical events, they are the
experiential correlate of the physical events. It is the sense that
the two phenomenologies make together that is the trigger.

> If, on the other hand, neurons do not fire in the
> absence of physical stimuli (which may have associated with them
> subjectivity - the observer cannot know this)

We know that for example, gambling affects the physical behavior of
the amygdala. What physical force do you posit that emanates from
'gambling' that penetrates the skull and blood brain barrier to
mobilize those neurons?

> then it would appear
> that the brain's activity is a chain of physical events, which could
> be computed.

If you watch a color TV show on a black and white TV, then it would
appear that the TV show is a black and white event. It's not that the
events are physical, it's that they have a physical side when they are
detected by the physical side of an observer.

>
> >>This would be magic by
> >> definition, and real magic rather than just apparent magic due to our
> >> ignorance, since the thoughts are not directly observable by any
> >> experimental technique.
>
> > Thoughts are not observed, they are experienced directly. There is
> > nothing magic about them, except that our identification with them
> > makes them hard to grasp and makes it easy for us take them for
> > granted.
>
> But if thoughts influence behaviour and thoughts are not observed,
> then observation of a brain would show things happening contrary to
> physical laws,

No. Thoughts are not observed by an MRI. An MRI can only show the
physical shadow of the experiences taking place.

>such as neurons apparently firing for no reason, i.e.
> magically. You haven't clearly refuted this, perhaps because you can
> see it would result in a mechanistic brain.

No, I have refuted it over and over and over and over and over. You
aren't listening to me, you are stuck in your own cognitive loop.
Please don't accuse me of this again until you have a better
understanding of what I mean what I'm saying about the relationship
between gambling and the amygdala.

"We cannot solve our problems with the same thinking we used when we
created them" - A. Einstein.

>
> >> How does nature "know" more than a computer simulation?
>
> > Because nature has to know everything. What nature doesn't know is not
> > possible, by definition. A computer simulation can only report what we
> > have programmed it to test for, it doesn't know anything by itself. A
> > real cell knows what to do when it encounters any particular
> > condition, whereas a computer simulation of a cell will fail if it
> > encounters a condition which was not anticipated in the program.
>
> A neuron has a limited number of duties: to fire if it sees a certain
> potential difference across its cell membrane or a certain
> concentration of neurotransmitter.

That is a gross reductionist misrepresentation of neurology. You are
giving the brain less functionality than mold. Tell me, how does this
conversation turn into cell membrane potentials or neurotransmitters?

>That's all that has to be
> simulated. A neuron doesn't have one response for when, say, the
> central bank raises interest rates and another response for when it
> lowers interest rates; all it does is respond to what its neighbours
> in the network are doing, and because of the complexity of the
> network, a small change in input can cause a large change in overall
> brain behaviour.

So if I move my arm, that's because the neurons that have nothing to
do with my arm must have caused the ones that do relate to my arm to
fire? And 'I' think that I move 'my arm' because why exactly?

If the brain of even a flea were anywhere remotely close to the
simplistic goofiness that you describe, we should have figured out
human consciousness completely 200 years ago.

>
> >>I don't know
> >> what I'm going to do tomorrow or all the possible inputs I might
> >> receive from the universe tomorrow. A simulation is no different: in
> >> general, you don't know what it's going to do until it does it.
>
> > A simulation is different in that it is trying to simulate something
> > else. The genuine subject of simulation can never be wrong because it
> > is not trying to be anything other than what it is. If you have a
> > computer program that simulates an acorn, it is always going to be
> > different from an acorn in that if the behavior of the two things
> > diverges, the simulation will always be 'wrong'.
>
> In theory we can simulate something perfectly if its behaviour is
> computable, in practice we can't but we try to simulate it
> sufficiently accurately. The brain has a level of engineering
> tolerance, or you would experience radical mental state changes every
> time you shook your head. So the simulation doesn't have to get it
> exactly right down to the quantum level.

Why would you experience a 'radical' mental state change? Why not just
an appropriate mental state change? Likewise your simulation will
experience an appropriate mental state to what is being used
materially to simulate it.

>
> >>Even
> >> the simplest simulation of a brain treating neurons as switches would
> >> result in fantastically complex behaviour. The roundworm c. elegans
> >> has 302 neurons and treating them as on/off switches leads to 2^302 ≈
> >> 8*10^90 permutations.
>
> > Again, complexity does not impress me as far as a possibility for
> > making the difference between awareness and non-awareness.
>
> My point was that even a simulation of a very simple nervous system
> produces such a fantastic degree of complexity that it is impossible
> to know what it will do until you actually run the program. It is,
> like the weather, unpredictable and surprising even though it is
> deterministic.

There is still no link between predictability and intentionality. You
might be able to predict what I'm going to order from a menu at a
restaurant, but that doesn't mean that I'm not choosing it. You might
not be able to predict a tsunami, but that doesn't mean it's because
the tsunami is choosing to do something. The difference, I think, has
to do with more experiential depth in between each input and output.
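As a quick editorial check of the combinatorics quoted above: 2^302 is in fact about 8*10^90 (a 91-digit number), which is easy to confirm directly:

```python
# The quoted count of on/off states for 302 binary neurons, checked
# directly with exact integer arithmetic.
n = 2 ** 302
print(f"2^302 has {len(str(n))} digits, roughly {float(n):.2e}")
# -> 91 digits, roughly 8.15e+90
```

The qualitative point stands either way: the state space is astronomically large even for a 302-neuron nervous system.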

>
> >> If philosophical zombies are possible then we
> >> are back with the fading qualia thought experiment: part of your brain
> >> is replaced with an analogue that behaves (third person observable
> >> behaviour, to be clear) like the biological equivalent but is not
> >> conscious, resulting in you being a partial zombie but behaving
> >> normally, including declaring that you have normal vision or whatever
> >> the replaced modality is.
>
> > No. You're stuck in a loop. If we do a video Skype, the moving image
> > and sound representing me is a philosophical zombie. If you ask it if
> > it feels and sees, it will tell you yes, but the image on your screen
> > is not feeling anything.
>
> When you see someone in real life you see their skin moving, but that
> doesn't mean the skin is conscious or the skin is a philosophical
> zombie. It is the thing driving the skin that we are concerned about.

Huh? If you ask their skin if it feels and sees it won't answer you.

>
> >>The only way out of this conclusion (which
> >> you have agreed is absurd) given that the brain functions following
> >> the laws of physics is to say that if the replacement part behaves
> >> normally (third person observable behaviour, I have to keep reminding
> >> you) then it must also have normal consciousness.
>
> > I understand perfectly why you think this argument works, but you
> > seemingly don't understand that my explanations and examples refute
> > your false dichotomy. Just as a rule of thumb, anytime someone says
> > something like "The only way out of this (meaning their) conclusion "
> > My assessment is that their mind is frozen in a defensive state and
> > cannot accept new information.
>
> You have agreed (sort of) that partial zombies are absurd

No. Stuffed animals are partial zombies to young children. It's a
linguistic failure to describe reality truthfully, not an insight into
the truth of consciousness.

>and you have
> agreed (sort of) that the brain does not do things contrary to
> physics. But if consciousness is substrate-dependent, it would allow
> for the creation of partial zombies. This is a logical problem. You
> have not explained how to avoid it.

Consciousness is not substrate-dependent, it is substrate descriptive.
A partial zombie is just a misunderstanding of prognosia. A character
in a computer game is a partial zombie.

>
> >> The cell is more simple-minded than the human, I'm sure you'll agree.
> >> It's the consciousness of many cells together that make the human. If
> >> you agree that an electric circuit may have a small spark of
> >> consciousness, then why couldn't a complex assembly of circuits scale
> >> up in consciousness as the cell scales up to the human?
>
> > This is at least the right type of question. To be clear, it's not
> > exactly an electric circuit that is conscious, it is the experience of
> > a semiconductor material which we know as an electric circuit. We see
> > the outside of the thing as objects in space, but the inside of the
> > thing is events experienced through time. You can ignore that though,
> > I'm just clarifying my hypothesis.
>
> > So yes, a complex assembly of circuits could scale up in consciousness
> > as cells do, if and only if the assembling of the circuits is driven,
> > at least in part, by internal motives rather than external
> > programming. People build cities, but cities do not begin to build
> > themselves without people.You can make a fire out of newspaper or
> > gasoline and a spark from some friction-reactive substance but you
> > can't make a substance out of 'fire' in general.
>
> > The materials we have chosen for semiconductors are selected
> > specifically for their ability to be precisely controlled and never to
> > deviate from their stable molecular patterns under electronic
> > stimulation. To expect them to scale up like living organisms is to
> > expect to start a fire with fire retardant.
>
> Would it count as "internal motives" if the circuit were partly
> controlled by thermal noise, which in most circuits we try to
> eliminate? If the circuit were partly controlled by noise it would
> behave unpredictably (although it would still be following physical
> laws which could be described probabilistically). A free will
> incompatibilist could then say that the circuit acted of its own free
> will. I'm not sure that would satisfy you, but then I don't know what
> else "internal motives" could mean.

These are the kinds of things that can only be determined through
experiments. Adding thermal noise could be a first step toward an
organic-level molecular awareness. If it begins to assemble into
something like a cell, then you know you have a winner.

>
> >> But the amygdala doesn't know who's going to win the Super Bowl
> >> either! For that matter, the TV doesn't know who's going to win, it
> >> just follows rules which tell it how to light up the pixels when
> >> certain signals come down the antenna.
>
> > Right. That's why you can't model the behavior of the amygdala -
> > because it depends on things like the outcome of the Super Bowl. The
> > behavior of atoms does not depend on human events, therefore it is an
> > insufficient model to predict the behavior of the brain.
>
> The outcome of the superbowl creates visual and aural input, to which
> the relevant neurons respond using the same limited repertoire they
> use in response to every input.

There is no disembodied 'visual and aural input' to which neurons
respond. Our experience of sound and vision *are* the responses of our
neurons to their own perceptual niche - cochlear vibration summarized
through auditory nerve and retinal cellular changes summarized through
the optic nerve are themselves summarized by the sensory regions of
the brain.

The outcome of the superbowl creates nothing but meaningless dots on a
lighted screen. Neurons do all the rest. If you call that a limited
repertoire, in spite of the fact that every experience of every living
being is encapsulated entirely within it, then I wonder what could be
less limited?

> Basically, a neuron can only fire or
> not fire, and the trigger is either a voltage-activated or
> ligand-activated ion channel. The brain's behaviour is complex and
> unpredictable despite the relatively simple behaviour of the neurons
> due to the complexity of the network.

Neurons are living organisms. We can access some views of their
activities by looking under a microscope or MRI, and we can
access some views by experiencing those activities first hand as what
we call 'our entire lives', and 'the only universe we can ever know'.

>
> >>The conscious intention is invisible to an outside
> >> observer. An alien scientist could look at a human brain and explain
> >> everything that was going on while still remaining agnostic about
> >> human consciousness.
>
> > Yes, but if the alien scientist's perceptual niche overlaps our own,
> > they could infer our conscious intention (because that's how sense
> > works...it imitates locally what it cannot sense directly...for us,
> > that indirect sense is about material objects in space and
> > computation.
>
> The alien would possibly be based on organic chemistry but the
> chemicals would be different to ours, and its nervous system would be
> different to ours. It might even be based on a completely different
> structure, electrical circuits or hot plasma inside a star. How would
> it know if its perceptions were anything like ours?

If it's electrical circuits or star plasma then it's not organic
chemistry. If it is organic chemistry then it might be similar enough
to ourselves to allow for a modicum of 'common sense'.

>
> >> That there is no evolutionary role for consciousness is a significant
> >> point.
>
> > Yes, I agree. It doesn't make evolution any less relevant to shaping
> > the content consciousness, but the fact of it's existence at all shows
> > that not everything that evolves needs to be described in purely
> > physical terms. We can just expand our view of evolution so it's not
> > just a story about bodies eating and reproducing, but about evolving
> > perception and feeling. Spectacular moments like the invention of
> > photosynthesis coinciding with the discovery of color by marine
> > organisms finding different ways to feel the sun inside of themselves.
>
> >>It leads me to the conclusion that consciousness is a necessary
> >> side-effect of intelligent behaviour, since intelligent behaviour is
> >> the only thing that could have been selected for.
>
> > That would be what you would have to conclude if you were operating on
> > theory alone, but we have the benefit of first hand experience to draw
> > upon, which shows that intelligent behavior is a side-effect of
> > consciousness and not the other way around. It takes years for humans
> > to develop evolutionarily intelligent behavior. Also intelligent
> > behavior is not necessary for evolution. Blue-green algae is still
> > around. It hasn't had to learn any new tricks in over a billion
> > years to survive.
>
> Intelligent organisms did as a matter of fact evolve. If they could
> have evolved without being conscious (as you seem to believe of
> computers) then why didn't they?

Because the universe is not all about evolution. We perceive some
phenomena in our universe to be more intelligent than others, as a
function of what we are. Some phenomena have 'evolved' without much
consciousness (in our view) - minerals, clouds of gas and vapor, etc.

>
> >> > No, the brain size only could correlate with bandwidth. A TV screen
> >> > doesn't grow over time, and it will never have to start repeating. A
> >> > lizard is like a 70" HDTV flat screen compared to a flea's monochrome
> >> > monitor, or a silicon chip's single band radio.
>
> >> A TV screen *will* start repeating after a long enough period. If it
> >> has N pixels since each pixel can only be on or off the number of
> >> possible images it can show is 2^N.
>
> > A single screen will repeat but the sequence of screens will not. You
> > could make a movie where you assign one screen frame to each integer
> > (say an all red screen for 1, orange for 2, yellow for 3, etc) and
> > then just synchronize the movie to compute Pi to the last digit in
> > screen frames. Would you agree that movie would not repeat?
>
> No, the movie would repeat, since the digits of pi will repeat.

The frames of the movie would not repeat unless you limit the sequence
of frames to some arbitrary finite length.

> After
> a 10 digit sequence there will be at least 1 digit repeating, after a
> 100 digit sequence there will be at least a digit pair repeating,
> after a 1000 digit sequence there will be at least a triplet of digits
> repeating and so on. If you consider one minute sequences of a movie
> you can calculate how long you would have to wait before a sequence
> that you had seen before had to appear.

The movie lasts forever though. I don't care if digits or pairs
repeat, just as a poker player doesn't stop playing poker after he's
seen what all the cards look like, or seen every winning hand there
is.
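(Editorially, the pigeonhole claim quoted above -- that any long enough
digit string must contain a repeated pair -- is easy to check with a toy
sketch; the 102-digit threshold is my illustration, since a string of
102 digits has 101 overlapping two-digit windows but only 100 distinct
pairs "00".."99" exist:)

```python
import random

def has_repeated_pair(s):
    # Collect every overlapping two-digit window and compare the count
    # of windows with the count of distinct windows.
    windows = [s[i:i + 2] for i in range(len(s) - 1)]
    return len(set(windows)) < len(windows)

# Any 102-digit string is guaranteed a repeated pair by pigeonhole.
random.seed(0)
digits = ''.join(random.choice('0123456789') for _ in range(102))
assert has_repeated_pair(digits)

# Short strings need not repeat: these ten windows are all distinct.
assert not has_repeated_pair('0123456789')
```

(None of this settles the dispute -- whether a repeating *subsequence*
counts as the *movie* repeating is exactly what the two posters disagree
about.)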

>
> >> A mental state of arbitrary length will start repeating unless the
> >> brain can grow indefinitely.
>
> > That's like saying the internet will start repeating unless your hard
> > drive can grow indefinitely.
>
> If the Internet is implemented on a finite computer network then there
> is only a finite amount of information that the network can handle.

Only at any one time. Given an infinite amount of time, there is no limit
to the amount of 'information' that it can handle.

> For simplicity, say the Internet network consists of three logic
> elements. Then the entire Internet could only consist of the
> information 000, 001, 010, 100, 110, 101, 011 and 111.
> Another way to
> look at it is the maximum amount of information that can be packed
> into a certain volume of space, since you can make computers and
> brains more efficient by increasing the circuit or neuronal density
> rather than increasing the size. The upper limit for this is set by
> the Bekenstein bound (http://en.wikipedia.org/wiki/Bekenstein_bound).
> Using this you can calculate that the maximum number of distinct
> physical states the human brain can be in is about 10^10^42; a *huge*
> number but still a finite number.

The Bekenstein bound assumes only entropy, and not negentropy or
significance. Conscious entities export significance, so that every
atom in the cosmos is potentially an extensible part of the human
psyche. Books. Libraries. DVDs. Microfiche. Nanotech.
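(The enumeration in the quoted passage -- three binary logic elements
giving exactly the eight strings 000 through 111 -- is just 2**N
counting; a minimal sketch of that arithmetic, for what it's worth:)

```python
from itertools import product

# A network of N two-valued elements has exactly 2**N distinct states.
N = 3
states = [''.join(bits) for bits in product('01', repeat=N)]
assert len(states) == 2 ** N  # eight states, as listed in the quote
print(states)
```

The same counting underlies the earlier 2^N figure for a screen of N
pixels and the (much larger) Bekenstein-bound estimate of about
10^10^42 brain states.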

>
> >> > That assumes mechanism a priori. If a fire could burn without oxygen,
> >> > then it would do the same things as fire. Do you see how that is
> >> > circular reasoning, since fire is oxidation?
>
> >> If a fire could burn in an atmosphere of nitrous oxide, for example,
> >> it would still be a fire. It wouldn't be a fire in oxygen, but it
> >> would perform other functions of a fire, such as allowing you to cook
> >> your dinner.
>
> > How is nitrous oxide not 'without oxygen'? You're just disagreeing
> > with me to disagree.
>
> A chlorine atmosphere can also support combustion:http://www.tutorvista.com/content/chemistry/chemistry-i/chlorine/chlo...

That just gets into a finer distinction of what we mean by fire. Not
all forms of combustion are oxidation, but that doesn't mean that all
compounds are equally combustible or something.

>
> >> >> and the deduction from this
> >> >> principle that any associated first person experiences would also be
> >> >> the same, otherwise we could have partial zombies.
>
> >> > People who have conversion disorders are partial zombies.
>
> >> No they're not. People with conversion disorders behave as if they
> >> have a neurological deficit for which there is no physiological basis,
> >> as evidenced by various tests.
>
> > Meaning that they experience something different than their neurology
> > indicates they should. A partial zombie is the same thing. Their brain
> > behaves normally but they experience something different than we would
> > expect.
>
> No, the definition of a philosophical zombie is that it behaves
> normally while lacking consciousness. The zombie does not have any
> observable deficit in neurological function, since then it would not
> behave normally.

'Normally' is just a matter of consensus. If you have two people with
hysterical blindness, then the guy who can actually see is the zombie
because he has no neurological deficit but has a different internal
experience.

>
> >> Some sort of substance is needed for the function but different types
> >> of substance will do.
>
> > Depends on the function. Alcohol can substitute for water in a
> > cocktail or for cleaning windows, but not for living organisms to
> > survive on.
>
> That's right, we need only consider a substance that can successfully
> substitute for the limited range of functions we are interested in,
> whether it be cellular communication or cleaning windows.

Which is why, since we have no idea what ranges of functions or
dependencies are contained in the human psyche, we cannot assume that
watering the plants with any old clear liquid should suffice.

>
> >>The argument is that the consciousness depends
> >> on the function, not the substance. There is no consciousness if the
> >> brain is frozen, only when it is active, and consciousness is
> >> profoundly affected by small changes in brain function when the
> >> substance remains the same.
>
> > What is frozen is the substance of the brain. Function is nothing but
> > the activity of a substance. Substances have similarities and produce
> > similar function, but a function which is alien to a substance cannot
> > be introduced. A black and white TV cannot show programs in color and
> > TV programs cannot be shown without a TV.
>
> But TV programs can be shown on a TV with an LCD or CRT screen. The
> technologies are entirely different, even the end result looks
> slightly different, but for the purposes of watching and enjoying TV
> shows they are functionally identical.

Ugh. Seriously? You are going to say that in a universe of only black
and white versus color TVs, it's no big deal whether it's in color or
not? It's like saying that the difference between a loaded pistol
blowing your brains out and a toy water gun is that one is a bit
noisier and messier than the other. I made my point; you are grasping
at straws.

> Differences such as the weight
> or volume of the TV exist but are irrelevant when we are discussing
> watching the picture on the screen, even though weight and volume
> contribute to functional differences not related to picture quality.

Alright then, let's say instead of black and white it's black and
infra-red. Now how is there no difference in its functional picture
quality?

>
> >> >> If you have a different
> >> >> substance that leaves function unchanged, consciousness is unchanged.
> >> >> For example, Parkinson's disease can be treated with L-DOPA which is
> >> >> metabolised to dopamine and it can also be treated with dopamine
> >> >> agonists such as bromocriptine, which is chemically quite different to
> >> >> dopamine. The example makes the point quite nicely: the actual
> >> >> substance is irrelevant, only the effect it has is important.
>
> >> > The substance is not irrelevant. The effect is created by using
> >> > different substances to address different substances that make up the
> >> > brain. Function is just a name for what happens between substances.
>
> >> Yes, but any substance that performs the function will do. A change in
> >> the body will only make a difference to the extent that it causes a
> >> change in function.
>
> > That's fine if you know for a fact that the substance can perform the
> > same function. Biology is verrrry picky about its substances. 53
> > protons in a nucleus: Essential for good health. 33 protons: Death.
>
> Yes, no doubt it would be difficult to go substituting cellular
> components, but as I have said many times that makes no difference to
> the functionalist argument, which is that *if* a way could be found to
> preserve function in a different substrate it would also preserve
> consciousness.

Of course, the functionalist argument agrees with itself. If there is
a way to do the impossible, then it is possible.

>
> >> If you directly stimulate the visual cortex the subject can see red
> >> without there being a red object in front of him. It doesn't seem from
> >> this that the putative red-experiencing molecules in the retina are
> >> necessary for seeing red.
>
> > You're right, the retina is not necessary for seeing red once they
> > have exposed the visual cortex. The mind can continue to have access
> > to red for many years after childhood blindness sets in, depending on
> > how early it happens. If it's too early, eventually the memories fade.
> > It does seem to be necessary to have eyes that can see red at some
> > point though. The mind doesn't seem to be able to make it up on its
> > own in blind people who have gained sight for the first time through
> > surgery.
>
> That's right, since the visual cortex does not develop properly unless
> it gets the appropriate stimulation. But there's no reason to believe
> that stimulation via a retina would be better than stimulation from an
> artificial sensor. The cortical neurons don't connect directly to the
> rods and cones but via ganglion cells which in turn interface with
> neurons in the thalamus and midbrain. Moreover, the cortical neurons
> don't directly know anything about the light hitting the retina: the
> brain deduces the existence of an object forming an image because
> there is a mapping from the retina to the visual cortex, but it would
> deduce the same thing if the cortex were stimulated directly in the
> same way.

No, it looks like it doesn't work that way:
http://www.mendeley.com/research/tms-of-the-occipital-cortex-induces-tactile-sensations-in-the-fingers-of-blind-braille-readers/

>
> >> But I'm not a computer program because there aren't any computer
> >> programs which can sustain a discussion like this. You know that. When
> >> there are computer programs that can pass as people we will have to
> >> wonder if they actually experience the things they say they do.
>
> > It's totally context dependent. In limited contexts, we already cannot
> > tell whether an email is spam or not or a blogger is a bot or not.
> > There will always be a ways to prove you're human. It will be demanded
> > by commercial interests and enforceable by law if it comes to that.
> > Identity creation is much more dangerous than identity threat.
>
> There aren't currently any computer programs that can pass the Turing
> test. I'm sure that within minutes if not seconds you could tell the
> difference between a program and a human if you were allowed to
> converse with it interactively.

I agree, of course.

>
> > No, I say that if you think you have non-deterministic free will then
> > it doesn't matter whether that feeling is detectable in some way from
> > the outside, and that such a feeling would have no possible reason to
> > exist or method of arising in a deterministic world. There is a
> > difference. I'm not saying that feeling free means that you are free,
> > I'm saying that feeling free means that determinism is insufficient.
>
> It is irrelevant to the discussion whether the feeling of free will is
> observable from the outside. I don't understand why you say that such
> a feeling would have "no possible reason to exist or method of arising
> in a deterministic world". People are deluded about all sorts of
> things: what reason for existing or method of arising do those
> delusions have that a non-deterministic free will delusion would lack?

Because free will in a deterministic universe would not even be
conceivable in the first place, so there would be nothing to have a
delusion about. Even delusional minds can't imagine a square circle or
a new primary color.

>
> >> You have said previously that you think every substance could have
> >> qualia, so why exclude silicon?
>
> > Silicon has qualia, but complex organizations of silicon seem like
> > they have different qualia than organisms made of water, sugar, and
> > protein. My guess is that the very things that make silicon a good
> > semiconductor make it a terrible basis for a living organism.
>
> This is your guess, but if everything has qualia then perhaps a
> computer running a program could have similar, if not exactly the
> same, qualia to those of a human.

Sure, and perhaps a trash can that says THANK YOU on it is sincerely
expressing its gratitude. With enough ecstasy, it very well might
seem like it does. Why would that indicate anything about the native
qualia of the trash can?

>
> >> If there were clear evidence tomorrow of determinism I think people
> >> would continue to use the word "novel" as well as "free will" in
> >> exactly the same way.
>
> > Only because people would compartmentalize the mistaken belief in such
> > 'evidence' from the unimpeachable subjective experience of novelty and
> > free will.
>
> Subjective experience of anything does not mean it is true.

True in the sense of corresponding to consensus 3-p expectations, no,
but all subjective experience is true in the sense that it is a
legitimate experience occurring in the universe.

> The only
> unequivocal conclusion I can draw from my subjective experience is
> that I have subjective experiences.

It is only through that one unequivocal subjective experience that you
can draw any conclusions about anything.

Craig

Craig Weinberg

Sep 23, 2011, 9:36:46 PM9/23/11
to Everything List
On Sep 23, 3:17 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 23 Sep 2011, at 02:42, Craig Weinberg wrote:
>
>
>
> > My assumption is that the experience of thinking of quantities in a
> > series, like 1, 2, 3, 4 is an example of counting.
>
> This is fuzzy. Now, even if you succeed in making explicit assumptions
> from which you can derive a form of counting, like 1, 2, 3, 4, 5, ...,
> you might not yet be able to justify, or even define, that 3 is
> smaller than 5.

You're right for sure that counting alone does not imply >, <, or +,
but to say that 3 is smaller than 5 is even more explicitly a
comparison.

> Usually "x < y" is defined by "there exists z such that x + z = y". You
> need some explicit assumption for the manipulation of "+".
>
I don't think that you do need an explicit assumption. Wouldn't that
be a third-order logic about the first-order counting and second-order
'<' and '>' operations? If we are talking about the sensorimotive feelings
which underlie arithmetic we need only a sense of 'more'.

For instance, you can look at two piles of staples and guess wrong
about which one has more. The fact of the literal count does not
generate a corresponding feeling of 'moreness', but a feeling of
moreness can be quantified and attached to a literal count.
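(For concreteness, the definition Bruno quotes just below -- "x < y"
iff there exists z with x + z = y -- can be sketched as a brute-force
witness search over the naturals; a toy illustration, not anyone's
formal system:)

```python
# "x < y" defined purely arithmetically: some positive z has x + z = y.
def less_than(x, y):
    # Search candidate witnesses z = 1 .. y (a sketch, not a proof).
    return any(x + z == y for z in range(1, y + 1))

assert less_than(3, 5)       # witness z = 2
assert not less_than(5, 3)   # no positive z satisfies 5 + z = 3
assert not less_than(4, 4)   # strict: 4 is not less than 4
```

The point of the definition is that order falls out of addition alone,
which is exactly why "counting without +" does not yet give you "<".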

>
> >>> The prejudice of arithmetic supremacy.
>
> >> I have chosen arithmetic because it is well taught in school. I could
> >> use any universal (in the Post Turing Kleene Church comp sense)
> >> machine or theory. And this follows from mechanism. The doctor
> >> encoded
> >> your actual state in a finite device.
>
> > I don't understand. Are you saying that you are not arithmetically
> > biased or that it's natural/unavoidable to be biased?
>
> I am not arithmetically biased. I have made some attempts at using
> combinators in the place of numbers.
> I am "finitistically" biased, as any computationalist. But to handle
> finite things, we can work with numbers, or another Turing universal
> system.

I don't see why finite has to be equated with computable though. Red
is finite, but not computable. Hilarious is finite, but not
quantifiable.

>
>
>
> >>>> You are the one talking like if you knew (how?) that some theory
> >>>> (mechanism) is false, without providing a refutation.
>
> >>> What kind of refutation would you like?
>
> >> A proof that mechanism entails 0 = 1.
>
> > That demands that mechanism be disproved mechanically,
>
> Not necessarily. But indeed, it is better that the process of
> verification of the proof, even if informal, can easily be thought as
> capable of being formalized, so that anyone can, with enough patience,
> be convinced.

That excludes any truths which require the voluntary participation of
the thinker to convince themselves. This is the crux of the problem
with mechanism. It is predicated on a voyeuristic ontology, which I
stipulate is theoretically possible, but is quite literally impossible
to realize. To 'prove' that you exist is to eat your own head, tail
first. Rather than trying to dodge that as an anomaly, I think you
have to build the hypothesis from that point as the foundation.

Just as the revelations of Galileo and Darwin required a braver
confrontation with 'what actually is the case' than would earlier
religious reckonings allow, these new understandings make empiricism
seem lazy and cowardly. 'Convince me,' they say. 'Make it so I am
overpowered by the evidence and have no ability to resist no matter
how hard I try.' The perfect ethos to usher in an era of
patriarchical industrial conquest. The thing is that this ethos is
played out. That era has peaked already, and now has devolved into its
decadent self-absorbed period. The new truths are sense based.
Observation is the new existence. These truths you must meet half way.
You must reclaim your share of 'common sense' and rescue your
orphaned and disqualified subjectivity.

>
> > which gives an
> > indication of what the problem with it is, but you have to read
> > between the lines to get it. A literal approach has limitations which
> > arise from its very investment in literalism.
>
> Lol. You might become a good lawyer.

Heh. I hate law too. Too much philosophy ;)

>
>
>
> >> Note a personal opinion according to which actual human machines are
> >> creepy.
>
> > Not sure what you mean. Individual humans can certainly seem creepy,
> > but I'm talking about there being a particular difference in our
> > perception of living things vs non-living things which imitate living
> > things. Even true of plants. Plastic plants are somewhat creepy in the
> > same way for the same reason. I don't think that it can be assumed
> > therefore that humans are only machines. They may be partially
> > machines, but machines may not ever be a complete description of
> > humanity.
>
> There is no complete description of humanity, nor is there any
> complete theory of what machines are and can be.
> For the nth times, you are just showing prejudice.

Just because they both cannot be described completely doesn't mean
that they intersect.

>
> You think like that:
> axiom: machines are necessarily stupid, you tell me that I might be a
> machine, so you tell me that I might be stupid.

Not at all. I know you think that's what I think but it's your
prejudice against my position. Machines are much smarter than us at
some things, not as smart at others, and not capable at all for still
other things. They are just different. Just like not all plants are
edible. It's not to say that plants which are inedible to us are less
than food, just that they aren't food for us.

>
> I think like that:
> axiom: I might be not stupid. You tell me that I am a machine. Nice,
> some machine might be non stupid.

I understand that, but you don't see that I've already been there done
that. I used to subscribe to that perspective too. In theory it makes
perfect sense. We are sort of badly wired robots bumping into each
other (and that is of course true), so it makes sense that a really
nicely designed robot would be a big improvement. In practice though,
there is something missing. Something subtle from a 3-p perspective,
but quite significant from a 1-p perspective. If the theory were
correct, that should not be the case and even the simplest logical
program should give us warm inviting feelings - a deep comfort like
hearing a human voice on the other end of the phone instead of a
voicemail system when we really need help. It's not like that though.
Mechanistic perfection leaves many people with a cold and sterile
feeling. Not in spite of its perfection, but in spite of our
theoretical assumptions of perfection. Like medieval medicine, we just
don't have it right yet. We are convinced that we are going in the
right direction, yet again and again true progress seems to elude us on
every front.

>
>
>
> >>> Mechanism is false as an
> >>> explanation of consciousness
>
> >> Mechanism is not proposed as an explanation of consciousness, but
> >> as a
> >> survival technic. The explanation of consciousness just appear to be
> >> given by any UMs which self-introspect (but that is in the
> >> consequence
> >> of mechanism, not in the assumption). It reduces the mind-body
> >> problem
> >> to a mathematical body problem.
>
> > Survival of what?
>
> Of your soul.

Ohh. In my view survival of the soul may be a foregone conclusion if
we understand that the singularity is the universe with all of the
time and space nullified.

>
> > It sounds like you are saying that consciousness is
> > just a consequence of being conscious, and that this makes the mind
> > into math.
>
> The assumption: I can survive with a digital brain, like I can survive
> with an artificial heart.

The problem is that you aren't part of your heart, but you are part of
your brain.

> The consequence: an explanation of both mind and matter can be
> extracted from addition and multiplication.

But what is addition and multiplication extracted from?

>
>
>
> >>> because I think that consciousness arises
> >>> from feeling which arises from sensation. Perception cannot be
> >>> constructed out of logic but logic always can only arise out of
> >>> perception.
>
> >> Right. But I use logic+arithmetic, and substituting "logic
> >> +arithmetic"
> >> for your "logic" makes your statement equivalent with non comp. So
> >> you
> >> beg the question.
>
> > I don't think that perception can be constructed out of logic
> > +arithmetic either, but logic+arithmetic are covered under perception.
>
> But this is what we are expecting an explanation for. Again, you are
> just saying that for *you* it seems obvious that a machine cannot be
> conscious in virtue of processing the relevant information.
> But in this field NOTHING is obvious.
> And usually, people pretending to be sure on those matter, have slowed
> the progress, when not torturing those who dare to doubt.

I'm the one daring to doubt. In theory it isn't obvious that a machine
cannot be conscious (independent of its material enactment, which
will provide whatever awareness can be utilized), but in practice, it
does not at all appear to be the case. If it were we wouldn't be
struggling to build faster smarter machines, we would just stick them
in tanks of warm bubbly mineral oil and let them lead the way.

>
> >>>> Who we?
>
> >>> We humans, or maybe even we animals.
>
> >> Then it is trivial and has no bearing on mechanism. The machine you
> >> can hear are, I guess, the human made machine. I talk about all
> >> machines (devices determined by computable laws).
>
> > I would say that there are no devices determined by computable laws
> > alone. They all have a non-comp substance component that contributes
> > equally to the full phenomenology of the device.
>
> That is right, and is a non trivial (rarely understood) consequence of
> the comp hypothesis. Any piece of matter has to obey to the statistics
> coming from the first person indeterminacy, and the presence of oracle
> in the UD* (the arithmetical running of the UD) entails a priori some
> non computable feature sustaining the stability of that piece of
> matter. Indeed, it is an open problem if the no-comp aspect is not
> more important than the one we can already infer from observation
> (like in QM or Q Many Worlds).
> So again, that alley will not work for refuting comp.

So if you are ok with non-comp substance being non-trivial to
computation, then how can you know that there is an objectively true
substitution level independent of substance?

>
>
>
> >>>> All what I hear is "human made machines are creepy, so I am not a
> >>>> machine, not even a natural one?".
> >>>> This is irrational, and non valid.
>
> >>> I'm not saying that I'm not a machine, I'm just saying that I am
> >>> also
> >>> the opposite of a machine.
>
> >> This follows from mechanism. If 3-I is a machine, then, from my
> >> perspective, 1-I is not a machine.
>
> > I think it's a continuum. Some parts of 1-I are more or less
> > mechanical than others, and some 3-I machine appearances are more or
> > less mechanical than others. Poetry is an example of a 1-p experience
> > which is less mechanical than a 1-p experience of running in place. A
> > rabbit is less mechanical of a 3-p experience than a mailbox. Do you
> > agree or do you think it must be a binary distinction?
>
> You might be right, and comp makes it possible to make this testable.
> It is not binary, given the 4+4*infinity internal person points of
> view accessible to machines.

Ok, cool.
>
>
>
> >>> It's not based upon a presumed truth of
> >>> creepy stereotypes, but the existence and coherence of those
> >>> stereotypes supports the other observations which suggest a
> >>> fundamental difference between machine logic and sentient feeling.
>
> >> Logic + arithmetic. The devil is in the detail.
>
> > Why would the addition of arithmetic address feeling?
>
> Technically, addition is not enough, but addition and multiplication
> (of integers, not of real numbers!) is enough to get universal löbian
> machine, and they have rich introspective abilities. They have
> feelings, provably so if you accept some definition of feeling of the
> literature (especially in the Theaetetus-Plato-Plotinus family).

I think any definition of feeling is probably inherently incomplete.
It can only be defined in 1-p.

>
>
>
> >> Define "ontological complement to electromagnetic relativity." Please
> >> be clear on what you are assuming to make this concept sense full.
>
> > Ontological complement, meaning it is the other half of the process or
> > principle behind electromagnetism and relativity (which I see as one
> > thing;
>
> So you assume physics.

Not as independent of perception, but yes as a description of
existential phenomena from our perspective, sure physics is valid.

>
> > roughly 'The Laws of Physics' which I see as 3-p, mechanical,
> > and pertaining to matter and energy as objects rather than
> > experiences).
>
> Hmm. I can make sense, with a lot of works.

Cool!
>
> > When we observe physical phenomena in 3-p changing and
> > moving, we attribute it to 'forces' and 'fields' which exist in space
> > but within ourselves we experience those same phenomena as feelings
> > through time (sense) which insist upon our participation (motive).
>
> Here I see just a variant of the usual identity thesis. I don't see
> any explanation of what is mind, nor matter.
> At least serious Aristotelian philosophers of mind agree that the mind-
> body problem is far from having a solution.

Mind and matter are opposite ends of an involuted continuum of 'common
sense', which is the singularity.

>
>
>
> >>> Poetry is your term that you injected into
> >>> this.
> >>> I was just confirming your intuition that poetry is an example
> >>> of how sensorimotive phenomena work - figurative semantic
> >>> association
> >>> of qualities rather than literal mechanistic functions of quantity.
>
> >> You were then just eluding the definition of sensorimotive. You
> >> continue to do rhetorical tricks.
>
> > I'm not eluding the definition, I am saying that by definition it
> > cannot be literally defined. It is the opposite of literal - it is
> > figurative. That's how it gets one thing (I/we) out of many (the
> > experience of a trillion neurons or billions of human beings).
>
> Comp explains a lot, and give rise to precise technical problem. You
> are doing the old trick : "don't ask, don't search".
>
>
I'm sure that comp does explain a lot, but so does any advanced system
of inquiry. It's not a trick, you have to account for figurative
dynamics if you are going to include consciousness in your cosmos.

>
> >>>> I find a bit grave to use poetry to make strong negative
> >>>> statement on
> >>>> the possibilities of some entities.
>
> >>> That's because you are an arithmetic supremacist,
>
> >> I assume things like 17 is prime!
>
> > I have no problem with 17 being prime, of course that is true.
>
> What a relief. I am serious. Sometimes discussion on comp with non-
> comp people end up on differing on that question.

Haha, nah, I'm very conservative on meddling with existing truths, I
only want to revise what is absolutely necessary to get to the
reinterpretation of the big picture - which is not really very much.
Just a sensorimotive primitive instead of 'energy'.

>
> > I would
> > even say that the kinds of truth arithmetic sensorimotives present is
> > supremely unambiguous,
>
> Well, technically, they still are. We just cannot define the numbers.
> All reasonable axiomatizations of numbers have some intrinsic fuzziness,
> and bizarre objects, clearly NOT numbers, still verify the axioms. But OK.
> It is a bit beyond the scope of the discussion here.
>
> > but I think that conflating unambiguity with
> > universal truth is an assumption which needs to be examined much more
> > carefully and questioned deeply.
>
> The conflation is the result of assuming that the brain works like a
> natural material machine.

The brain does work like a natural material machine, but the person
using the brain picks up where the brain leaves off, extending into
the far reaches of never-never-mechanism land.

>
> > What would unambiguous facts be
> > without ambiguous fiction? Not just from a anthropocentric point of
> > view, but ontologically, how do you have something that can be
> > qualified as arithmetic if nothing is not arithmetic?
>
> Since Gödel 1931, or just by Church thesis, I can assure you that
> arithmetic is beyond human imagination, even without assuming comp.
> And besides, arithmetic can explain why, seen from inside, real non
> arithmetical appearances grow.
> Just as Riemann used complex analytic theory to study the prime number
> distribution, we know that the relation between numbers can reflect a
> mathematical reality which is beyond numbers.

Hmm. a mathematical reality beyond numbers. If that's the case, then
why call it arithmetic? Why not call it something like sense?

>
> > Arithmetic
> > compared to what? What can it be but life, love, awareness, qualia,
> > free will?
>
> Assuming comp (or not) we can say that arithmetic is full of life,
> love, awareness, qualia, and ... quanta. Indeed, it is a reason to
> find mechanism plausible.
>
I don't know what your reason for saying that is, other than you have
already concluded in advance that it must be that way or that it could
be that way. I don't experience anything to suggest that it is that
way in practice.
>
>
> >>> so therefore cannot
> >>> help yourself but to diminish the significance of subjective
> >>> significance.
>
> >> On the contrary, mechanism singles out the fundamental (but not
> >> primitive) character of consciousness and subjectivity. You are the
> >> one who dismisses the subjectivity of entities.
>
> > It singles it out as just another generic process so that a trash can
> > that says THANK YOU stamped on the lid is not much different from a
> > handwritten letter of gratitude from a trusted friend. I don't dismiss
> > the subjectivity of any physical entity, I just suspect a finite range
> > of perceptual channels which scale to the caliber of the particular
> > physical entity or class of entities.
>
> But a self-referential Löbian machine is not trash.
>
You could make one out of trash, couldn't you?
>
>
> >> Gödel's theorem would have convinced nobody if the self-reference he
> >> used was based on 1p.
>
> > Why not? It's just intersubjective 1p plural. The 1p that we share
> > with the least common denominators of existence.
>
> No. You would be right for physics, here, but not for arithmetic.
> Gödel's theorem convinces everyone because the self-references used in
> his proof are 3p communicable. They are of the type 1+1=2, or 17 is
> prime.
>

1+1=2 is still intersubjective 1-p, it just has a low substitution
level. A very common sense. It doesn't exist independently of the
thinkers thinking it though.

>
>
> >> This is only one reason among an infinity of
> >> them. If you believe some 1p is used there, you have to single out
> >> where, and not in the trivial manner that all 3p notion can be
> >> understood only by a first person. Gödel's self-reference is as much
> >> 3p as 1+1=2.
>
> > 1+1=2 is 1-p also.
>
> Also, yes. But the point is that this is not used in Gödel's proof.

It's still a human concept. You have to be a human to understand it.
It's ok for it to be different from trash, but why can't it also be
different from human consciousness?
I'm saying that we can only ever see the parts of everything that are
ours to study.

>
> >>>>> Because an instruction has no 3-p existence.
>
> >>>> Ah?
>
> >>> It is not enough to have an instruction sequence, the instruction
> >>> must
> >>> be executed as physical energy upon a material object (even if it's
> >>> our own brain chemistry) if it is to have any 3-p consequence.
>
> >> Not at all. You confuse implementation and physical implementation.
> >> Even without comp, a physical implementation is just a paricular
> >> example of implementation.
>
> > Then you are asserting a zombie implementation.
>
> You beg the question. You might just say that you postulate non-comp.
> I am not sure that you need this axiom, which is awfully complex to
> make precise.
>
Isn't it inconsistent though to say that you can't have a human body
without there being a human experience but you can have a disembodied
program?
>

>
> >> I was saying "with a "t", not with a "s", for the word "intentional",
> >> which of course has a different meaning than the "intensional" of the
> >> logicians.
> >> (I do agree with Hintikka that "intensional" and "intentional" are
> >> related concept, though, but that is another topic).
>
> > I still don't get it. I'm saying that projecting human sense intention
> > into a machine is anthropomorphizing.
>
> If you say "my car is tired", it might be anthropomorphizing. If you
> refuse to give a steak to my son-in-law because he got an artificial
> brain, then you are being racist.
>
haha, well yeah it's not my place to discriminate. I was just being
metaphorical saying I have a restaurant. I think your son-in-law would
be the one who misses having a brain.
cool

(gotta continue with this later.. thanks)

Craig Weinberg

Sep 24, 2011, 8:51:37 PM
to Everything List
(next installment)

On Sep 23, 3:17 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:

> On 23 Sep 2011, at 02:42, Craig Weinberg wrote:

>
> > It is a comparison made by a third
> > person observer of a human presentation against their expectations of
> > said human presentation. Substitution 'level' similarly implies that
> > there is an objective standard for expectations of humanity. I don't
> > think that there is such a thing.
>
> It all depends on what you mean by "objective standard".

That there is some kind of actual set of criteria which make the
difference between human and non human. Some apes probably have more
human qualities than some humans, and some artificial brain extensions
will probably have more human qualities than others. I don't think
it's likely that we'll be able to replace a significant part of the brain
with digital appliances, though. I would compare it with body
prosthetics. An organ here or a limb there, sure, but replacing a
spinal cord or brain stem with something non-biological is probably
not going to work.

> I'm afraid that in the near (but not so near) future rich people will
> have lower level digital brains than poor people.
> And, probably by a sort of intrinsic superstition, humans will
> build ever lower subst-level digital brains.
> Some will be "enlightened" and be glad with a very high subst level, or
> just accept having no brain, and lose interest in manifesting themselves on
> this branch of reality.
> Even if 100% of humanity bet on comp, there will be a vast
> variety of human implementations of that idea. In fact comp implies
> fewer norms, less evidence, more questions, more possibilities.
>
>
>
> >>> There is no zombie, only prognosia/HADD.
>
> >> If there is no zombie, then non-comp implies an infinitely low level.
>
> > No, it's not that zombies can't theoretically exist, it's that they
> > don't exist in practice because the whole idea of zombies is a red
> > herring based upon comp assumptions to begin with.
>
> The technical notion of zombie does not rely on comp. It is just a
> human, acting normally, but which is assumed to be without any inner
> life. Non-comp + current data makes them plausible, which is an
> argument for comp.

I think that the whole premise is too flawed to be useful in practical
considerations. It posits that there is such a thing as 'acting
normally'. The existence of sociopathy indicates that there are
naturally occurring 'partial zombies' already, to the extent that it
means anything, but the concept of p-zombies itself assumes that
human 'normalcy' can be ascertained by observing moment to moment
behavior rather than over a lifetime. A fully digital person, like
digital music, may satisfy many of the qualities we associate with a
person, but always carry with them a clear inauthenticity which seems
aimless and empty. If they are simulating a person who is already like
that, then it could be said that they have achieved substitution
level, but it's not really a robust test of the principle.

>
> > If you don't assume
> > that substance can be separated from function completely, then there
> > is no meaning to the idea of zombies. It's like dehydrated water.
>
> I am rather skeptical on substance. But I tend to believe in waves and
> particles, because group theory can explain them. But I don't need
> substance for that. And with comp, there is no substance that we can
> relate, even indirectly, to consciousness. I see the notion of
> substance as the Achilles' heel of the Aristotelian theories.
>
But if you are saying that zombies cannot exist, doesn't that mean
positing a substance that is automatically associated with a
particular set of functions? Otherwise you could just program
something to behave like a zombie.

To say that comp prevents zombies is actually a self-defeating
argument I think. It seems to violate the principle of universal
emulation so that you could not, for instance have one digital person
which was the virtualized slave of another, because the second digital
body would be, in effect, a zombie. This seems to inject a special
case of arbitrary Turing limitation. Consider the example of remote
desktop software, where we can shell out one computer to another. What
happens to the host computer's 'consciousness'? Does it not become a
partial zombie, unable to locally control its behavior?

>
>
> >>> There is no
> >>> substitution 'level', only a ratios of authenticity.
>
> >> ?
>
> > Say a plastic plant is 100% authentic at a distance of 50 yards to the
> > naked eye, but 20% likely to be deemed authentic at a distance of
> > three feet. Some people have better eyesight. Some are more familiar
> > with plants. Some are paying more attention. Some are on
> > hallucinogens, etc. There is no substitution level at which a plastic
> > plant functions 100% as a plant from all possible observers through
> > all time and not actually be a genuine plant. Substitution as a
> > concrete existential reality is a myth. It's just a question of
> > arbitrarily fixing an acceptable set of perceptions by particular
> > categories of observers and taking it for functional equivalence.
>
> An entity suffering in a box does suffer, independently of any
> observers.

A box can contain a body, but it's not clear that it can contain the
experience that goes with that body. Sensory isolation in humans leads to rapid
escape into the imagining psyche. But if you want to stick with a flat
reading of the example, we could say that the box is an observer, at
least to the extent that its existence must resist the escape of the
trapped entity.
>
>
>
> >>> The closer your
> >>> substitute is to native human sensemaking material, the more of the
> >>> brain can be replaced with it, but with diminishing returns at high
> >>> levels so that complete replacement would not be desirable.
>
> >> That is even worse. This entails partial zombies. It does not make
> >> sense. I remind you that zombies, by definition, cannot be seen as
> >> such
> >> by their behavior at all.
>
> > That's the theoretical description of a zombie. Like dehydrated water.
> > In reality, one observer's zombie is another observer's nondescript
> > stranger in the park. There is no validity to these observations
> > relative to the would-be zombie's quality of subjectivity.
>
> We are always talking in a theory. Reality is what we search for.

My point was that zombies as they are described cannot exist, and that
the real world principle the device of zombiehood is intended to
defeat is not even addressed.
It would be more interesting to use the clock-wheel version so that
people could see the invalidity of the argument ;)

>
>
>
> >>>> Everyone agrees that if the level is infinitely low, then current
> >>>> physics is false. To speculate that physics is false for making
> >>>> machines stupid is a bit of a stretch.
>
> >>> Physics isn't false, it's just incomplete.
>
> >> No, it has to be false to make the substitution level infinitely low.
> >> *ALL* theories, including the many trying to marry gravitation
> >> and
> >> the quantum, entail its Turing emulability.
>
> > The substitution level isn't infinitely low, it's just not applicable
> > at all. There is no substitution level of white for black, lead for
> > gold, up for down, etc. I doubt the objective existence of
> > 'substitution'. Substitution is an interpretation - not necessarily a
> > voluntary one, but an interpretation nonetheless.
>
> So, what will you say if your daughter accept an artificial brain?
> Substitution is an operational term, like castration, lobotomy, etc.

I would say she would be committing suicide, unless the technology had
already been tested with people being gradually offloaded and reloaded
into their own brains to verify the retention of consciousness.
Honestly I don't think it's going to come to that. I think the
limitations of mechanism to generate human sentience will be revealed
experimentally long before anyone considers replacing an entire brain.

>
>
>
> >>> A good Eurocentric map of
> >>> the world before the Age of Discovery isn't false, just not
> >>> applicable
> >>> to the other hemisphere.
>
> >> The analogy fails to address the point I made.
>
> > If the point you made is that physics has to be false if the human
> > psyche has no substitution level, then my analogy is that a map of
> > known territory (physics) doesn't have to be false just because it
> > doesn't apply to an unknown territory (psyche).
>
> Physics is not false. But physicalism, or weak materialism, is
> incompatible with mechanism.
>
I don't so much get into philosophical conventions. Mechanism,
physicalism, materialism etc, are just splinters of a single faith to
me. They are all rooted in 1-p supervenience ontological assumptions,
which I don't use. My view is not dualism because 1-p subjectivity is
neither substance nor-non substance, but rather it is the perceptual
experience of the sensorimotive energy within and between substance.
It cannot be conceived of properly as an object or noun. Terms like
'soul' or 'consciousness' are necessary for us to represent it
linguistically, but the actual referent is a verb. It is to feel,
experience, and do.

>
>
> >>>>> It seems far from
> >>>>> scientific at this point to dismiss objections to an arbitrary
> >>>>> physical substitution level.
>
> >>>> With all known theories, there is a level. To negate comp you must
> >>>> diagonalize on all machines, + all machines with oracles, etc. I
> >>>> think
> >>>> you misinterpret computer science.
>
> >>> I'm not trying to interpret computer science, I'm trying to
> >>> interpret
> >>> the cosmos.
>
> >> Well, if there is a cosmos, there is evidence that some computers
> >> belong to it. You can't brush them away.
> >> The cosmos does emulate computers, and computers can emulate cosmoses
> >> (but not the whole physical reality, by UDA).
>
> > I don't brush them away, I just say that it's not so simple as psyche
> > = computer. Computation can be accomplished with much less psyche than
> > our perception of that computation might imply.
>
> 1) comp does not say that psyche = computer, just that psyche can be
> manifested genuinely by a computer. The psyche itself is in the
> internal view of arithmetic, and is not even arithmetical.

I think we are saying almost the same thing except that you are
assuming a freestanding arithmetic as primitive, whereas I see the
internal view you refer to as not a view of arithmetic but a view of
substance, and that arithmetic is one category of relations which
arise from that involuted dualism. I think that I'm right. These
conversations are only making me more and more certain of that. Think
about what is psyche's opposite. It is not arithmetic. It's not the
mind/math problem. The dualism is always with MATTER or BODY. To me,
arithmetic is mind asymptotically approaching body/matter isomorphism,
and comp is mind simulating body simulating mind's reflection. I think
if you can really consider this fairly and scientifically, you should
be able to see that this is a deeper, broader truth than comp.

> 2) It is true that computation needs much less than psyche, indeed, it
> does not need psyche, but that is why comp is a real explanation: it
> explains the existence of psyche (what the machine thinks about)
> without assuming psyche.
> You say comp is false, because you believe that we can explain psyche
> only by assuming psyche. What you say is "psyche cannot be explained
> without psyche".

That psyche cannot be explained is only one factor, and not the most
important one, which leads me in the direction of comp being false.
Some others are:

1) I am compelled by the symmetry and cohesiveness of a Sensorimotive-
Electromagnetic Perceptual Relativity rooted in matter, space, and
entropy as the involuted consequence of energy, time, and
significance.

2) I am compelled by our naive perception of being 'inside of' our
physical heads and bodies, rather than inside of a disembodied logical
process - even with a simulation hypothesis, our ability to experience
varying degrees of sanity and dissociation rather than a real world
which is indistinguishable from a structured dream.

3) I am compelled by the transparency of our proprioceptive
engagement. Even though our perception can be shown to have a
substitution level, our ordinary experience is quite reliable at
informing the psyche of its material niche. We don't usually
experience dropouts or pixelation, continuity errors, etc. It's not
perfect, but our ability to communicate with each other across many
different logical encodings and substances without any other entity
interfering is a testament to the specificity of human consciousness
to the precise fusion of physical neurology and psychic unity.

4) All of the aesthetic hints bound up in our fictions of the unlive
and the undead, as well as the stereotypes of cold, empty mechanism.
Consistent themes in science fiction and fantasy. Again suggesting a
mind-body pseudo-duality rather than an arithmetic monism.

5) The clues in human development, with childhood seeing innate
grounding in tangible sensorimotive participation rather than
universal, self-driven sui generis mathematical facility. It takes
years for us to develop to the point where we can be taught counting,
addition, and multiplication.

6) The lack of real world arithmetic phenomena independent of matter.
I think that arithmetic seems like an independent epistemology because
it is a distillation of the kind of orders and symmetries which we
share with matter, and as such is both distant from the most
subjective experiences of the psyche, but also non-corporeal since it
is in fact a sensorimotive projection. It's basically a dimension of
literal sense we can experience with matter, but it is still just one
channel of sense which will not automatically give rise to non-
theoretical phenomenologies.

>
>
>
> >> 98% of the scientists are wrong on the consequences of comp. They use
> >> it
> >> as a materialist solution of the mind-body problem. You are not
> >> helping by taking their interpretation of comp as granted, and
> >> talking
> >> like if you were sure that comp is false. Why not trying to get the
> >> correct understanding of comp before criticizing it?
>
> > If 98% of scientists who study comp are wrong about its consequences,
> > what chance would I have of beating them at their own game? It's not
> > that I know comp is false, it's that I have a different hypothesis
> > which recontextualizes the relation of comp to non-comp and makes more
> > sense to me than comp (or my misunderstanding of comp).
>
> It is your right. I just do not follow your argument against comp.

Which one?

> You might as well use UDA to say that comp implies non-materialism,
> and I postulate matter, so I postulate non-comp.
> The problem, for me, is that such a move prevents the search for an
> explanation of matter, and mind.

I don't understand what is the appeal of making arithmetic unexplained
and primitive instead.

>
>
>
> >>> I have confidence in the relation between
> >>> comp and non-comp. That is the invariance, the reality, and a theory
> >>> of Common Sense.
>
> >> comp gives a crucial role to no-comp.
>
> > Meaning that it is a good host to us as guests in its universe. I
> > don't think that's the case. This universe is home to us all and we
> > are all guests as well (guests of our Common Sense)
>
> ?
>
It makes us strangers in an arithmetic universe.
I mean that UDA seems to underestimate the reality of reality.
>
>
> >>> It needs
> >>> fluids - water, cells.
>
> >> Clothes.
>
> > Would you say that H2O is merely the clothes of water, and that water
> > could therefore exist in a 'dehydrated' form?
>
> Sure. I do this in dreams. Virtual water gives virtual feeling of
> wetness with great accuracy.

Virtual water doesn't do all of the things that real water does
though. It's just a dynamic image and maybe some tactile sense. It
doesn't have to boil or evaporate, doesn't quench thirst, etc. I agree
that some of our sense of water is reproduced locally in the psyche,
but it is clearly a facade of H2O.

>
>
>
> >>> Something that lives and dies and makes a mess.
>
> >> Universal machines are quite alive, and indeed put the big mess in
> >> platonia.
>
> > What qualities of UMs make them alive?
>
> The fact that they are creative, reproduce, transform themselves, are
> attracted by God, sometimes repulsed by God also, and that they can
> diagonalize against all normative theories made about them. And many
> more things.
>

It sounds worthwhile but I would need to see some demos and
experiments dumbed down for laymen to have an opinion.
What is the point of agreeing with a tautological statement? Why would
one proof be enough? Is this science or church? You just said all of
this great stuff that UM can do which is just like us, and then your
one example of this is you and me ourselves? What is the point of
saying that we are like ourselves and what would that have to do with
supporting mechanism?
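
To make Bruno's diagonalization claim at least partly concrete, here is a toy demo (my own sketch, in Python; the "predictor" stands in for any total normative theory about programs, and all the names are mine, not Bruno's):

```python
# Toy diagonalization: given ANY total "normative theory" claiming to
# predict every program's output, build a program that falsifies it.

def make_rebel(predictor):
    """Return a program that differs from whatever `predictor`
    says it will return."""
    def rebel(x):
        predicted = predictor(rebel, x)  # ask the theory about ourselves
        return predicted + 1             # then deliberately differ
    return rebel

# A naive normative theory: "every program returns 0 on every input".
def theory(program, x):
    return 0

rebel = make_rebel(theory)
print(rebel(5), theory(rebel, 5))  # -> 1 0: the theory is wrong about rebel
```

The same move works against any total predictor passed in, which is the sense in which a universal machine can escape every normative theory made about it.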

>
>
>
> >>> How does the brain understand these things if it has no access to
> >>> the
> >>> papers?
>
> >> Comp explains exactly how things like papers emerge from the
> >> computation. The explanation is already close to Feynman's formulation
> >> of QM.
>
> > Unfortunately this sounds to me like "Read the bible and your
> > questions will be answered."
>
Read sane04: http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract...
>

I have. I like it but I can only get so far and I like my own ideas
better.

>
> >>>> But you don't seem serious in "arguing" against comp, and admitting
> >>>> you don't know anything in computer science.
>
> >>> Oh I freely admit that I don't know anything in computer science. My
> >>> whole point is that computer science only relates to half of
> >>> reality.
>
> >> I don't know anything about X. My whole point is that X only does this.
> >> But if you admit knowing nothing about X, how can you derive anything
> >> about X.
> >> You are just confessing your prejudice.
>
> > I don't know anything about ship building but I know that it only
> > concerns seafaring and not aerospace. I think that being a master
> > shipwright could very well present a significant obstacle to
> > pioneering in aerospace.
>
> That's not an argument. At the most a hint for low level substitution.

It's an example of why arguments from authority do not compel me in
this area.

>
>
>
> >>> I'm not trying to make the universe fit into a computer science
> >>> theory. I only argue against comp because it's what is distracting
> >>> you
> >>> from seeing the bigger picture.
>
> >> I show, in short, that comp leads to Plotinus. If that is not a big
> >> picture!
> >> Comp explains conceptually, and technically, the three Gods of the
> >> greek, the apparition of LUMs and their science and theologies, the
> >> difference between qualia and quanta, sensation and perception,
> >> perception and observation.
>
> > I believe you but i get to those things without vaporizing substance.
>
> Which means you are affectively attached to the bullet of Aristotle.
> Substance is an enigma. Something we have to explain, ontologically,
> or epistemologically.

I think that I have explained substance. It is the opposite of
perception over time. Perceptual obstacles across space. Together they
form an involuted pseudo-dualism.

>
>
>
> >> You just criticize a theory that you admit knowing nothing about.
> >> This
> >> is a bit weird.
>
> > My purpose is not to criticize the empire of comp, it is to point to
> > the virgin continent of sense.
>
> So you should love comp, because it points on the vast domain of
> machine's sense and realities.

I do love it in theory. It's a whole new frontier to explore. It's
just not the one I'm interested or qualified to explore.

> On the contrary, totally honest rational materialists can't help
> themselves from going toward soul and person elimination. Have you
> read Churchland, or even Dennett?

Yeah, I understand where they are coming from, I just feel like I am
5-25 years ahead of them. This interview I thought gives a little
glimpse into non-fanatical rational materialism:
http://wunc.org/tsot/archive/The_Nature_of_Conciousness.mp3/view
It's inferred introspection though. I don't think that can be
primitive. I don't buy Pinocchiopoiesis.

>
> > The 3-p
> > view of schematizing the belief of a thing is a second order
> > conception to the 1-p primitive of what it is to feel that one *is*.
>
> Well, not in the classical theory of beliefs and knowledge.
>
> > It's an experience with particular features - a sensory input and a
> > motive participation. Without that foundation, there is nothing to
> > model.
>
> That's unclear. The "p" in Bp & p might play that role, as I thought
> you grasped above.

You can't start with doubting the self, because logically that would
invalidate the doubt and fall into infinite regress. It's not even
possible to consider because the Ur-arithmetic would have nothing to
experience it.

>
>
>
> >>>>>>> Give me one example, one common sense metaphor,
> >>>>>>> one graphed function that could suggest to me that there is any
> >>>>>>> belief, feeling, or awareness.going on.
>
> >>>>>> The fact that the universal machine remains silent on the deep
> >>>>>> question
>
> >>>>> What deep question?
>
> >>>> 'are you consistent?", "do you believe in a reality", "do you
> >>>> believe
> >>>> in after life", etc.
>
> >>> Have you considered that it's silent because it's not equipped to
> >>> answer the question?
>
> >> yes, but it does not work. The machine cannot answer the question for
> >> a deeper reason, which she can find and expose.
> >> For example the machine remains silent on the question "are you
> >> consistent", but later she can say that "If I am consistent, then I
> >> will remain silent on the question of my consistency".
>
> > Meh. It sounds like asking a spirit 'if you can't hear me, do NOT
> > knock three times'
>
> No. It is more like if you ask a spirit a too intimate question. On
> another question, he does knock three times, and then he can explain
> why he did not knock earlier.
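
Bruno's "knocking" remark has a precise formal counterpart, if I read it in standard provability-logic terms (my gloss, writing \Box p for "the machine proves p"):

```latex
% Löb's theorem:
\Box(\Box p \to p) \to \Box p
% Instantiating p := \bot, and writing consistency as \neg\Box\bot:
\Box(\neg\Box\bot) \to \Box\bot
% Contrapositive, itself provable by the machine:
\neg\Box\bot \to \neg\Box\neg\Box\bot
% "If I am consistent, then I cannot prove my consistency."
```

So the machine stays silent on "are you consistent?" yet can later prove the conditional explaining her silence.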

It makes no sense that all machine spirits would be so touchy about
their religious beliefs. Would that mean that any person being
digitally simulated by a UM would also be unable to answer
philosophical questions?
By themselves? You think we are going to soon discover manufacturing
plants popping up in the woods with robot resistance fighters building
new models of themselves?
That's why that approach is not much better than religion as far as
describing the fundamentals of the cosmos or psyche.
I understand that you see that, but that's because you are looking at
my ideas through the lens of your ideas. You would have to actually
consider my ideas on their own to see why subst level is not
applicable. There is no subst level for reality, only for perceptual
subsets of reality.

>
> > I say that
> > substitution level does not apply. I think that to prove substitution
> > level exists
>
> Comp implies that no one can prove it exists. No machine can know for
> sure its substitution level, even after a successful teleportation.
> She might believe that she has 100% survived but suffer from
> anosognosia.

I can understand what you are saying, and I agree that it is a good
way of modeling why a self-referencing entity would not be able to get
behind itself, but it seems like a contradiction. If you say that we
are machines, then you are saying that we cannot know for sure our
substitution level, which is exactly what you are criticizing me on.
If a machine cannot know for sure its subst level, does it know for
sure that the level is not infinite? If not, then comp itself is not
Turing emulable?

>
> > it would need to be shown that there is some phenomenon
> > which can be substituted to the point that there is no possibility
> > from anything at any time distinguishing it from the genuine original.
> > Even taking perceptual frames of reference off the table (which is the
> > stronger truth), all that is necessary is for something to exist which
> > has a memory of a time before the substitution was instantiated. If I
> > have a video tape of someone replacing a brain with an artificial
> > brain, then the artificial brain has the quality of being disbelieved
> > by me as the genuine brain, and there is nothing that the person can
> > do or not do to compensate for that, therefore the substitution level
> > fails. I have the choice of how I want to treat this person after the
> > surgery, I can reveal my knowledge to employers, neighbors, etc, and
> > that will change the course of the individual's life in ways which
> > would not occur had the surgery not taken place.
>
> The same problem might occur for someone smoking cannabis, but this is
> not an argument for saying that we don't survive the act of smoking a
> joint.
> Of course, if something is illegal, be it smoking grass or using
> teleporters, ending in jail implies some change, but usually, even in
> that case, we say that people survived. Of course if you apply the
> death penalty ... The substitution failed because the poor guy ends up
> killed by the anti-comp. If *that* is your notion of failure. Well,
> thanks for the warning!
>
I just think when it comes to consciousness, we cannot be sure that
its identity is not directly coordinated to timespace-massenergy. My
hypothesis suggests that in fact it is.

>
>
> >>> Comp isn't false, it just doesn't recognize
> >>> the contribution of the non-comp substrate of computation,
>
> >> It does. I insist a lot on this. Comp is almost the needed philosophy
> >> for curing the idea that everything is computable.
> >> Please study the theory before emitting false speculation on it.
>
> > So you are saying that comp supervenes on or is equally fundamental as
> > non-comp?
>
> Arithmetical truth can be partitioned into level of complexity,
> sigma_1, sigma_2, sigma_3, etc...
> The computable is sigma_0 and sigma_1. Above it is uncomputable. Most
> meta-properties on the sigma_1 are above sigma_1.
> The number relations escape the computable, and to make a theory of
> the computable, we cannot avoid excursions into the non-computable. We
> can always prove that a machine stops without leaving the sigma_1, but
> to prove that some machine will not stop is quite another
> matter.
>
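
The sigma_1 asymmetry Bruno describes can be sketched in code (my toy model, in Python; a "program" here is a zero-argument generator, each yield counting as one computation step):

```python
# Halting is verifiable by a finite run (sigma_1);
# non-halting is never certified by any finite run.

def semi_decide_halting(program, max_steps):
    """Run `program` for at most `max_steps` steps."""
    g = program()
    for _ in range(max_steps):
        try:
            next(g)
        except StopIteration:
            return True   # halted: a finite run proves it
    return None           # unknown: no finite budget certifies non-halting

def halts():              # halts after 3 steps
    for _ in range(3):
        yield

def loops():              # never halts
    while True:
        yield

print(semi_decide_halting(halts, 10))  # -> True
print(semi_decide_halting(loops, 10))  # -> None
```

However large max_steps is made, a None answer never becomes a proof of non-halting, which is why "will not stop" sits above sigma_1.
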
Does comp explain why sigma_2 becomes uncomputable, and what
computability actually is?

>
> >>> so it's not
> >>> applicable for describing certain kinds of consciousness where non-
> >>> comp is more developed.
>
> >> Consciousness and matter are shown by comp to be highly non
> >> computable. So much that the mind-body problem is transformed into a
> >> problem of justifying why the laws of physics seems to be computable.
>
> > I think they not only seem to be computable but they are computable,
> > and that this is due to how sensorimotive orientation works.
>
> Hmm... Then you can compute if you will see a photon in the up state
> starting from the superposition (up + down)?

No, a photon (if it existed, which I don't think it does) is
completely outside of our perceptual inertial frame. Which is why it
seems to do unusual things because we are seeing it secondhand through
photomultipliers or other equipment which cannot report to us anything
which cannot be represented in the very limited common sense we share
with glass and steel. Within our naive perceptual niche, our laws of
physics are computable.

>
> > It's not
> > just a solipsistic simulation, it's a trans-solipsistic localization.
>
> You mean a first plural localization? Those are not computable,
> assuming either comp or QM.

Not necessarily plural, just that, for example, when I look at the sun
with my eye, there is a sun-sense localization taking place within the
eyeball, retina, visual cortex, and perceiver. They are all different
materials and aspects of materials, but they are all imitating, to the
extent that they can, the meaning of the 3-p event they observe. I
realize there is a certain sequence and logic to all of this from 3-p,
which looks like a chain reaction under microcosmic analysis, but from
1-p it is a synchronized gestalt which is local but also concretely
entangles the many systems.

to be continued...

Craig

Craig Weinberg

Sep 25, 2011, 10:22:54 AM9/25/11
to Everything List
final part
I can see that there is a functional threshold between the life and
death or sleeping and waking of an individual organism, but are you
also saying that there are functional thresholds which define the
aliveness of entire species? It seems to me that death would be a
problem for comp. How does comp simulate irreversible deaths? Couldn't
you always run the tape backwards?
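(The "tape backwards" question turns on whether the update rule is information-preserving. A deterministic history can be run backwards only if each state has a unique predecessor; a lossy rule merges states, and the past becomes unrecoverable. A toy contrast, my own illustration:)

```python
def reversible_step(x):
    """Bijective update: every state has exactly one predecessor."""
    return x + 1

def reversible_unstep(x):
    """Exact inverse: this is what 'running the tape backwards' needs."""
    return x - 1

def lossy_step(x):
    """Non-injective update: distinct states share a successor, so
    the predecessor cannot be recovered from the result alone."""
    return x * x
```

`lossy_step(2)` and `lossy_step(-2)` both give 4, so from 4 alone no inverse can tell which past occurred; irreversibility is a property of the rule, not of the tape.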

>
> > It's a matter of being alive like we human
> > beings are alive. No virus is capable of infecting all life forms but
> > no life form is immune from all viruses. All life forms are immune to
> > computer viruses though, and all computers are immune to all
> > biological viruses. I'm asking why would a human personality be any
> > more likely to inhabit a computer than a human virus?
>
> ? Because a human virus has a very limited range of possibilities,
> compared to a human.

That should make it even easier for a virus to inhabit a computer?

>
>
>
> >>>>> By comp, there should be no particular reason why a Turing machine
> >>>>> should not be vulnerable to the countless (presumably Turing
> >>>>> emulable) pathogens floating around.
>
> >>>> They are not programmed to do that. They are programmed to
> >>>> infect very particular organisms.
>
> >>> If it's close enough to emulate the consciousness of a particular
> >>> organism, why not its vulnerability to infections?
>
> >> Because it has different clothes, and a virus needs the clothes to
> >> get the key for infecting.
>
> > What are the clothes made of? If arithmetic, then it's just a matter
> > of cracking the code to make a computer virus run in humans. Why
> > wouldn't a human brain be our clothes, so that we need it to get the
> > key for consciousness?
>
> That follows from the UDA. But comp is assumed there (not proved).

(answers in the book...)

>
>
>
> >>>>> But of course that is absurd. We cannot
> >>>>> look forward to reviving the world economy by introducing medicine
> >>>>> and
> >>>>> vaccines for computer hardware. What accounts for this one-way
> >>>>> bubble
> >>>>> which enjoys both total immunity from biological threats but
> >>>>> provides
> >>>>> full access to biological functions? If computation alone were
> >>>>> enough
> >>>>> for life, at what point will the dogs start to smell it?
>
> >>>> Confusion of level.
> >>>> With comp, dogs already smell them, in some sense.
>
> >>> Not confusion of level; clarification of level. In what sense do
> >>> dogs
> >>> smell abstract Turing emulations?
>
> >> In the sense that the Universal Dovetailer generates all possible
> >> dogs in front of all possible smelling things, but with varied and
> >> relative measure.
>
> > Does it generate all possible smells as well, and if so, what is the
> > point of going through the formality of generating them?
>
> Well, that happens once you assume that 0, 1, 2, 3, ... obey the
> addition and multiplication laws. Notably.
>
I don't see how addition and multiplication become olfactory,
especially if they already have all possible olfactory potentials
within them. What would be the point of realizing them in a
simulation? Why does the UD even want to run itself? Is comp curious
about what it already knows?
I'm just using the terms in a standard way. Singularity is a little
more specialized, but I think pretty straightforward. It is The
Everything we Theorize Of.
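(As an aside, the "dovetailer" idea is concretely programmable: interleave the execution of program 0, then programs 0-1, then 0-2, and so on, so every program gets unboundedly many steps even though there are infinitely many programs. A toy sketch; the stand-in "programs" are mine, not real Turing machines:)

```python
from itertools import count

def program(i):
    """Stand-in for the i-th program: an endless trace of (program, step)."""
    for step in count():
        yield (i, step)

def dovetail(rounds):
    """Triangular schedule: round r admits program r-1, then runs one
    step of every program admitted so far. No program is ever starved."""
    running = {}
    trace = []
    for r in range(1, rounds + 1):
        running[r - 1] = program(r - 1)   # admit one new program per round
        for i in range(r):
            trace.append(next(running[i]))  # one step for each admitted program
    return trace
```

`dovetail(3)` produces the interleaving `[(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]`: program 0 is three steps along while program 2 has just started, yet every program eventually gets arbitrarily many steps.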
>
>
> >>> It is a timespace signature
>
> >> What is a "timespace", what is a "signature".
>
> > Timespace is the container of events. It's the gaps between material
> > objects and the gaps between subjective experiences.
> > Signature is my figurative description of a condensed expression of
> > unique identity.
>
> >>> composed of sensorimotive mass-energy.
>
> >> You said "sensorimotive" = ontological complement to electromagnetic
> >> relativity
>
> >> explain "ontological complement to electromagnetic relativity mass-
> >> energy".
>
> > Electromagnetic relativity is a description of the phenomenology of
> > mass-energy. Mass energy is what it is, electromagnetism is what it
> > does in groups, and relativity is what groups of electromagnetic
> > groups do.
>
> > Sensorimotive perception is the ontological complement - the polar
> > opposite - the involuted pseudo-duality of electromagnetic relativity.
> > Sensorimotive phenomena are the experiencers and experiences which
> > comprise the 1-p interior of electromagnetism. Perceptions are the
> > inertial frames or worlds which group experiences and experiencers
> > together and comprise the 1-p interior of relativity.
>
> ?
Not sure how else to put it.

http://s33light.org/post/9370537190
>
>
> >>> It is the formalization
>
> >> ?
>
> > Realization.
>
> ?

Do you not recognize any difference between something actually
occurring and the idea of the possibility of it occurring?

>
>
>
> >>> of an
I don't understand what more needs to be explained. Something that is
not alive being taken for something living feels a certain way
(creepy, interesting, exciting, chilling, upsetting, etc) because
there is something specific to sense there that goes beyond mere
unfamiliarity. A new kind of rock is not creepy, but things like the
Heike Crab http://www.blog.speculist.com/archives/heike%20crab.jpg are
a bit unnerving to many people. How does comp account for something
like that feeling other than just dismissing it?

>
>
>
> >>>>> They aren't
> >>>>> just unfamiliar, they are the walking dead and unliving persons.
>
> >>>> Machines are not necessarily zombies.
>
> >>> Okay, we can call them meople or something if you like.
>
> >> This will not help.
>
> >>>>> They
> >>>>> are the antithesis of human life.
>
> >>>> So you say, without any argument. That confirms that it is a sort
> >>>> of
> >>>> racism.
>
> >>> Race has nothing to do with it. That just casts some kind of social
> >>> shaming into it. It's just a functional definition. Human life is
> >>> living organisms. The antithesis of that would be things which act
> >>> like organisms but have either never been alive (machines) or have
> >>> died already but continue to supernaturally perform superficial
> >>> ambulatory-predatory functions (zombies).
>
> >> I will eventually fall asleep.
>
> > But a machine will not.
>
> A patient machine will not!
>
You're making it sound as if the machine has a choice. If I make a
perfectly frictionless impervious engine, I don't think it has any say
in the matter. It's going to keep running as long as I want to keep
feeding fuel into it.

>
> >>>>>> By machine, I just mean "Turing-emulable" (with or without
> >>>>>> oracle).
> >>>>>> That includes us, by the mechanism assumption.
> >>>>>> It is a constant that novelists foresee the future(s).
>
> >>>>> What if 'emulation' is a 1-p hallucination?
>
> >>>> Why would it be like that?
>
> >>> Because it's an interpretation that varies from subject to subject.
> >>> You see a program thinking and experiencing, I see an inevitable
> >>> execution of unexperienced instructions.
>
> >> This is what we can see when we look at a brain.
>
> > I don't see instructions in the brain.
>
> They are distributed. You can emulate a neuronal net with a computer,
> and when it learns, the information/instruction gets distributed in a
> non-explicit way in the sensitivity of each neuron.

The effects of the instructions, their consequences, are distributed,
but the instructions themselves do not exist. If I read Chinese, then
a Chinese book has information for me. If I can't read it, it still
has information for my eyes. If I'm blind, it has no information.
There is no such thing as information or instruction outside of our
perceptual interpretation of it, which is rooted in our existential
identities.
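(Bruno's "distributed in a non-explicit way" can be shown in a few lines: train a one-neuron perceptron on the OR function, and afterwards the "instruction" exists only as a weight pattern, with no stored rule anywhere. A minimal sketch, my own construction, not anything from the thread:)

```python
def train_or(epochs=20, lr=0.5):
    """Perceptron learning of OR: the rule ends up smeared across
    two weights and a bias, never written down explicitly."""
    w, b = [0.0, 0.0], 0.0
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # classic perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def net(w, b, x1, x2):
    """Evaluate the trained neuron."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, the weights implement OR, yet inspecting any single number reveals no "instruction"; only the joint pattern does the work, which is the point both sides are arguing over.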

>
>
>
> >>> Even in zoology, phenomena
> >>> like camouflage suggest that emulation is only 'skin deep'. If deep
> >>> emulation were possible, I think you would have organisms which
> >>> evolve
> >>> chameleon powers which fool all predators, not just some. An animal
> >>> that can turn into a stone would be far superior to one which can
> >>> imagine funny stories.
>
> >> It depends on the context.
>
> > Not necessarily.
>
> Well, an octopus can take on a stone appearance, but is it superior
> to a human comic?
>
Sure it would be. Maybe not a stone 'appearance', but being able to
actually become a stone to all observers would, I think, be more of an
advantage than any esoteric fiction. What is going to come after
you if you are a rock?

>
> >>>>> How could it really not
> >>>>> be? If we only can project our perception of a process onto a
> >>>>> machine,
> >>>>> why would the rest of the process that we can't perceive
> >>>>> automatically
> >>>>> arise?
>
> >>>> Why not?
>
> >>> Because we're not putting it in there.
>
> >> We don't need to. The UMs have it at the start, and the LUMs can know
> >> that.
>
> > How do you know they have it? Where does it come from?
>
> Computer science. Arithmetic.

It's circular reasoning to me. You are just defining qualia as
arithmetic from the beginning, and then saying that since computer
science exposes the arithmetic which you imagine has something to do
with private perception, that proves that the qualia must come
with the arithmetic. It's as if you were saying that life is nothing
but reproduction, and therefore anything that reproduces (copy machines,
financial transactions, etc.) is alive.

We aren't seeing that pan out though, and I think I know exactly why
but you won't seriously consider it. You've just made the theory
unfalsifiable by disqualifying any significant difference between
fantasy and reality, living and non-living, mind and body.

>
>
>
> >>> It's like if you have only a
> >>> way to detect sugar and water, your version of imitation orange
> >>> juice
> >>> would be the same as your imitation grape juice, just sugar water.
>
> >> That is a poor analogy, which again fails to notice the richness of
> >> the machine's inner life (the one they can talk about partially, like
> >> us).
>
> > There is no way to tell that a machine's inner life is not just our
> > outer mechanics.
>
> There is no way to tell that a Craig's inner life is not just my
> outer mechanics.

Not for you, no. Yet you claim to know that my inner life can be
reproduced by comp.

>
>
>
> >>>>>>> I think that you are jumping to the conclusion that simulation
> >>>>>>> does
> >>>>>>> not require an interpreter which is anchored in matter.
>
> >>>>>> That follows from the UDA-step-8. If my own emulation requires a
> >>>>>> material digital machine, then it does not require a material
> >>>>>> machine.
>
> >>>>> Not to produce the 3-p simulacra of you, no, but to produce your
> >>>>> genuine 1-p emulation, it would require the same material
> >>>>> machine as
> >>>>> you do.
>
> >>>> Why?
>
> >>> Because the interior of that material is the subject which is
> >>> experiencing the 1-p phenomena.
>
> >> Define "interior of material" in a way we can understand (not in a
> >> sequence of complex words whose intelligible definitions we despair
> >> of).
>
> > Interior of material is straightforward. You view the world from
> > inside of your head, or body, or house. So does everything else.
>
> That's projective geometry. Interesting, but too poor for the rich 1p
> phenomenon.
>
It's not a sealed container, it's just a vantage point. We see
*through* the inside of our brain and eyes, not just looking at the
inside of our brain but out into the world. The inside of our brain
sees the outsides of our body's world through the inside of our eyes.
It's richer than 3p because it is bringing all of history compressed
as interpretive projections and pattern recognitions specific to that
1p subjective reality tunnel.

>
>
> >>>>> A material digital machine would not suffice because the
> >>>>> material which the machine is being executed digitally on already
> >>>>> has
> >>>>> it's own (servile and somnambulant compared to organic chemistry)
> >>>>> genuine 1-p experience.
>
> >>>> So our consciousness is the consciousness of our basic elements.
>
> >>> No, not at all. It is the conscious synthesis of the consciousness
> >>> of
> >>> our basic elements.
>
> >> This only makes both consciousness and matter mysterious in an ad hoc
> >> way. That is not enough to refute a competing theory.
>
> > It doesn't make anything mysterious to me.
>
> That might be your problem. You might study books on the mind-body
> problem.
> Read papers, and submit solutions to problems. Or make your theory
> precise enough to submit new questions.

That might be possible, yeah, although you have to understand that to
me it's like telling me 'you should read what the alchemists are
saying and submit your solutions to producing lapis philosophorum, or
make your theory precise enough so that they can figure out if it's
any good for turning lead into gold'.

>
>
>
> >>>> This
> >>>> explains nothing. Neither consciousness nor matter. It leads to an
> >>>> open infinite regress, which needs infinities to overcome all
> >>>> possible
> >>>> machines.
>
> >>> I think it explains everything.
>
> >> Explains just one thing, just to see.
>
> >>> I don't see any infinities at all.
>
> >> Then we are Turing emulable.
>
> > There can't be finite non-comps?
>
> That is ambiguous. The word "finite" is as tricky as
> "infinite". I have asked for an example, and you gave me "yellow",
> but you did not succeed in showing what is non-computable there.
> (Except the qualia itself, but this is already the case for machines,
> and it is actually not a finite thing.)

Yellow is the qualia itself (what else would it be?). Why isn't the
qualia of yellow finite? It comes between orange and green in the
visible spectrum. It's perceived as a product of red and green light,
etc. It's a primary color in ink. What isn't finite about it?
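(The finite, public face of yellow that Craig lists is easy to pin down in code: in additive RGB, full red plus full green renders as yellow, and spectral yellow occupies roughly 570-590 nm, between green and orange. Wavelength bounds are the usual textbook figures; the helper names are mine:)

```python
RED = (255, 0, 0)
GREEN = (0, 255, 0)

def add_light(c1, c2):
    """Additive color mixing: sum the channels, clipped to 8 bits."""
    return tuple(min(255, a + b) for a, b in zip(c1, c2))

def is_spectral_yellow(nm):
    """Roughly 570-590 nm: between green (~495-570) and orange (~590-620)."""
    return 570 <= nm <= 590
```

`add_light(RED, GREEN)` gives `(255, 255, 0)`, the triple displays render as yellow. None of this is the quale itself, of course, which is exactly the disagreement here.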

>
>
>
> >>>>>> Matter is what glues the machine dreams,
>
> >>>>> I think that it is obviously not. If we were machines and that
> >>>>> were
> >>>>> true, then we should come out of the womb filled with intuitions
> >>>>> about
> >>>>> electronics, chemistry, and mathematics, not ghosts and space
> >>>>> monsters. Dreams are not material, they are living subjective
> >>>>> feelings. Matter is what is too boring and repetitive to be
> >>>>> dreamed
> >>>>> of. Too tiny and too vast, too hot and cold, dense and ephemeral
> >>>>> for
> >>>>> dreams. Dream bullets don't make much of an impact. Dream
> >>>>> injuries
> >>>>> don't have to heal.
>
> >>>> You beg the question.
>
> >>> I don't see how.
>
> >> Because you say that a dream bullet does not do injury, but comp
> >> explains that a virtual bullet can injure a virtual observer.
>
> > But that doesn't play out experimentally. In a dream virtual bullets
> > can have ambiguous effects, no effects, instant healing, etc.
>
> Not if the virtual reality operates below my substitution level. The
> virtual bullet will injure me as much as in "reality".

That's not true. It only seems like it should be that way in a purely
theoretical, Matrix-like way. There is no dream that can deliver pain
comparable with an actual serious injury, especially not the
reality of the hundreds of hours it takes to heal. You assume from the
start that reality is insubstantial and then prove it to yourself by
saying that substance isn't real. I would call this Strong Simulation,
and the problem with it is that it makes it too hard for the truth to
get through. Were it not imposed programmatically from the outside,
there would be no access for sense to take place. There is no common
sense or ground wire to orient the connectivity of things.

>
>
>
> >> So as an
> >> argument, you are just saying that we are not virtual, without
> >> explanation.
>
> > No, I'm saying that we are not only virtual, we are actual as well.
> > The explanation is that we can conceptualize a difference between
> > dream and reality - regardless of the veracity of that difference.
> > Determinism and comp would have no use for a concept of non-
> > simulation.
>
> I have to go. I might comment later on the rest of your post. But it
> might be my last comment to you, given that I am not particularly
> interested in any non-comp theory, and a bit bored by your systematic
> way of eluding arguments, mainly by referring to personal opinion. I
> respect non-comp believers, but I do have a problem with invalid
> arguments, and/or much too fuzzy prose, and, especially, your
> unwillingness to ameliorate it. I hope you don't mind my frankness,
>
> Bruno

I'm ok with that. We've probably said about all there is to say on the
subject, and these posts are getting incredibly lengthy.

Thanks,
Craig

Bruno Marchal

Sep 25, 2011, 10:33:15 AM9/25/11
to everyth...@googlegroups.com
On 25 Sep 2011, at 02:51, Craig Weinberg wrote:

(next installment)

On Sep 23, 3:17 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:

On 23 Sep 2011, at 02:42, Craig Weinberg wrote:


It is a comparison made by a third
person observer of a human presentation against their expectations of
said human presentation. Substitution 'level' similarly implies that
there is an objective standard for expectations of humanity. I don't
think that there is such a thing.

It all depends on what you mean by "objective standard".

That there is some kind of actual set of criteria which make the
difference between human and non human.

You really frighten me. The last time I read something similar was when I read Mein Kampf by A. Hitler, notably on handicapped people, homosexuals, Jews, etc.
I don't think there is any such criterion. Maybe a human is just someone having human parents, but then evolution already blurs the notion.




Some apes probably have more
human qualities than some humans,

That's a good point. Animals might be more human than human, like in humanism.




and some artificial brain extensions
will probably have more human qualities than others. I don't think
that it's likely to be able to replace a significant part of the brain
with digital appliances though.

We know this, but I think this comes from not having a genuine idea of what universal machines are.


I would compare it with body
prosthetics. An organ here or a limb there, sure, but replacing a
spinal cord or brain stem with something non-biological is probably
not going to work.

The point is that it is possible in principle. No doubt the brain is a very complex structure, and it might take a long time before we do it. But the fact that we will have to take centuries, or billions of years, or infinity changes nothing. Comp is not the idea that we can or will have artificial brains, but just that the brain is a natural machine. This is my working assumption, and, given that you say it is false/wrong/implausible, we just ask you why you think so.



The technical notion of zombie does not rely on comp. It is just a
human, acting normally, but which is assumed to be without any inner
life. Non-comp + current data makes them plausible, which is an
argument for comp.

I think that the whole premise is too flawed to be useful in practical
considerations.

The practical thing that you can extract from comp is the infinite spiritual power of *modesty*. 



It posits that there is such a thing as 'acting
normally'. The existence of sociopathy indicates that there are
naturally occurring 'partial zombies' already, to the extent that the
term means anything, but the concept of p-zombies itself assumes that
human 'normalcy' can be ascertained by observing moment-to-moment
behavior rather than over a lifetime. A fully digital person, like
digital music, may satisfy many of the qualities we associate with a
person, but always carry with them a clear inauthenticity which seems
aimless and empty. If they are simulating a person who is already like
that, then it could be said that they have achieved substitution
level, but it's not really a robust test of the principle.

Comp justifies by itself that there is no such test.




If you don't assume
that substance can be separated from function completely, then there
is no meaning to the idea of zombies. It's like dehydrated water.

I am rather skeptical of substance. But I tend to believe in waves and
particles, because group theory can explain them. But I don't need
substance for that. And with comp, there is no substance that we can
relate, even indirectly, to consciousness. I see the notion of
substance as the Achilles' heel of the Aristotelian theories.

But if you are saying that zombies cannot exist,

I am not saying that. I am saying that non-comp + materialism entails bizarre infinities and/or zombies.


doesn't that mean positing a substance that is automatically
associated with a particular set of functions? Otherwise you could
just program something to behave like a zombie.

?
To program something acting like a zombie is the same as programming something to behave like a human.



To say that comp prevents zombies is actually a self-defeating
argument I think.

I might differ a little bit from Stathis on this. It is not clear that comp prevents zombies.
Japanese engineers build quite sophisticated dolls for sexual purposes, and we might be in sincere trouble the day they fight to be recognized as living and conscious beings, even if we know they are gifted at simulating (not emulating) feelings.
A friend of mine (an engineer) built a cute little dog-robot to feature in one of his pieces of theater. The dog was tortured on stage, crying/whining in a very convincing way. The audience was in shock. It was almost impossible not to confer some feeling on the robot, despite the extreme simplicity of its program. Fake humans capable of deluding people for a great amount of time cannot be entirely excluded. What I do say is that non-comp entails "real" p-zombies (and/or other more technical absurdities).



It seems to violate the principle of universal
emulation, so that you could not, for instance, have one digital person
which was the virtualized slave of another, because the second digital
body would be, in effect, a zombie. This seems to inject a special
case of arbitrary Turing limitation. Consider the example of remote
desktop software, where we can shell out one computer to another. What
happens to the host computer's 'consciousness'? Does it not become a
partial zombie, unable to locally control its behavior?

In *that* sense, all bodies are zombies. A body is always a construct of minds. This is not obvious, and is related to the fact that comp makes physicalness emerge from consciousness, which emerges from the infinities of number relations.





There is no
substitution 'level', only ratios of authenticity.

?

Say a plastic plant is 100% authentic at a distance of 50 yards by the
naked eye, but 20% likely to be deemed authentic at a distance of
three feet. Some people have better eyesight. Some are more familiar
with plants. Some are paying more attention. Some are on
hallucinogens, etc. There is no substitution level at which a plastic
plant functions 100% as a plant for all possible observers through
all time without actually being a genuine plant. Substitution as a
concrete existential reality is a myth. It's just a question of
arbitrarily fixing an acceptable set of perceptions by particular
categories of observers and taking it for functional equivalence.

An entity suffering in a box does suffer, independently of any
observers.

A box can contain a body, but it's not clear that it can contain the
experience with that body. Sensory isolation in humans leads to rapid
escape into the imagining psyche. But if you want to stick with a flat
read of the example, we could say that the box is an observer, at
least to the extent that its existence must resist the escape of the
trapped entity.

My point is that we don't need to observe a brain for a consciousness to exist in relation with that brain.






The closer your
substitute is to native human sensemaking material, the more of the
brain can be replaced with it, but with diminishing returns at high
levels so that complete replacement would not be desirable.

That is even worse. This entails partial zombies. It does not make
sense. I remind you that zombies, by definition, cannot be seen as
such from their behavior at all.

That's the theoretical description of a zombie. Like dehydrated water.
In reality, one observer's zombie is another observer's non-descript
stranger in the park. There is no validity to these observations
relative to the would-be zombie's quality of subjectivity.

We are always talking within a theory. Reality is what we search for.

My point was that zombies as they are described cannot exist, and that
the real world principle the device of zombiehood is intended to
defeat is not even addressed.

?


We'd better use the contemporary image to help people see the validity
of the argument, but I could reason with a clock-wheel-like machine. The
key point is the mathematical notion of universality (for computation).

It would be more interesting to use the clock-wheel version so that
people could see the invalidity of the argument ;)

That is like a Ned Block type of argument. If consciousness supervenes on a brain in virtue of its being a computer, then it must supervene on any bizarre implementation of that brain (like the Chinese population sending 0s and 1s to each other). This just proves nothing.







Everyone agrees that if the level is infinitely low, then current
physics is false. To speculate that physics is false in order to make
machines stupid is a bit of a stretch.

Physics isn't false, it's just incomplete.

No, it has to be false for the substitution level to be infinitely low.
*ALL* theories, including the many trying to marry gravitation and
the quantum, entail its Turing emulability.

The substitution level isn't infinitely low, it's just not applicable
at all. There is no substitution level of white for black, lead for
gold, up for down, etc. I doubt the objective existence of
'substitution'. Substitution is an interpretation - not necessarily a
voluntary one, but an interpretation nonetheless.

So, what will you say if your daughter accepts an artificial brain?
Substitution is an operational term, like castration, lobotomy, etc.

I would say she would be committing suicide, unless the technology had
already been tested with people being gradually offloaded and reloaded
into their own brains to verify the retention of consciousness.

We can verify this. But of course, it is easier to say "yes" to the doctor once Uncle Paul has done it and *seems* satisfied with it. But this is contingent, and might just be a matter of the skill of the doctor. I wouldn't say "yes" to a doctor who is drunk all the time.



Honestly I don't think it's going to come to that. I think the
limitations of mechanism to generate human sentience will be revealed
experimentally long before anyone considers replacing an entire brain.

Perhaps, but this is not relevant for the comp hyp.







A good Eurocentric map of
the world before the Age of Discovery isn't false, just not
applicable
to the other hemisphere.

The analogy fails to address the point I made.

If the point you made is that physics has to be false if the human
psyche has no substitution level, then my analogy is that a map of
known territory (physics) doesn't have to be false just because it
doesn't apply to an unknown territory (psyche).

Physics is not false. But physicalism, or weak materialism, is
incompatible with mechanism.

I don't so much get into philosophical conventions. Mechanism,
physicalism, materialism, etc., are just splinters of a single faith to
me. They are all rooted in 1-p supervenience ontological assumptions,
which I don't use. My view is not dualism because 1-p subjectivity is
neither substance nor non-substance, but rather it is the perceptual
experience of the sensorimotive energy within and between substance.
It cannot be conceived of properly as an object or noun. Terms like
'soul' or 'consciousness' are necessary for us to represent it
linguistically, but the actual referent is a verb. It is to feel,
experience, and do.

This is very vague, and I don't see any possible interpretation of this that makes a problem for the comp hyp.
Define "view of substance".




2) It is true that computation needs much less than psyche, indeed, it
does not need psyche, but that is why comp is a real explanation: it
explains the existence of psyche (what the machine thinks about)
without assuming psyche.
You say comp is false because you believe that we can explain psyche
only by assuming psyche. What you say is "psyche cannot be explained
(without psyche)".

That psyche cannot be explained is only one factor, and not the most
important one, which leads me in the direction of comp being false.
Some others are:

1) I am compelled by the symmetry and cohesiveness of a Sensorimotive-
Electromagnetic Perceptual Relativity rooted in matter, space, and
entropy as the involuted consequence of energy, time, and
significance.

So you assume some physicalness. We are progressing.



2) I am compelled by our naive perception of being 'inside of' our
physical heads and bodies, rather than inside of a disembodied logical
process - even with a simulation hypothesis, our ability to experience
varying degrees of sanity and dissociation rather than a real world
which is indistinguishable from a structured dream.

OK. Thanks for admitting this is naive.



3) I am compelled by the transparency of our proprioceptive
engagement. Even though our perception can be shown to have a
substitution level, our ordinary experience is quite reliable at
informing the psyche of its material niche. We don't usually
experience dropouts or pixelation, continuity errors, etc. It's not
perfect, but our ability to communicate with each other across many
different logical encodings and substances without any other entity
interfering is a testament to the specificity of human consciousness
to the precise fusion of physical neurology and psychic unity.

That's right, but I don't see why that would be false with comp, unless we prove that the comp physics is different from the inferred physics.



4) All of the aesthetic hints bound up in our fictions of the unlive
and the undead, as well as the stereotypes of cold, empty mechanism.
Consistent themes in science fiction and fantasy. Again suggesting a
mind-body pseudo-duality rather than an arithmetic monism.

But arithmetic does explain phenomenological dualities (indeed octalities).



5) The clues in human development, with childhood seeing innate
grounding in tangible sensorimotive participation rather than
universal, self-driven sui generis mathematical facility. It takes
years for us to develop to the point where we can be taught counting,
addition, and multiplication.

And? Don't confuse the human conception of numbers with the numbers as the intended subject matter of human studies.


6) The lack of real world arithmetic phenomena independent of matter.

That is like Roger Granet's argument that if 17 exists, it has to exist somewhere. But this begs the question. It posits at the start that existence is physical existence.


I think that arithmetic seems like an independent epistemology because
it is a distillation of the kind of orders and symmetries which we
share with matter, and as such is both distant from the most
subjective experiences of the psyche, but also non-corporeal since it
is in fact a sensorimotive projection. It's basically a dimension of
literal sense we can experience with matter, but it is still just one
channel of sense which will not automatically give rise to non-
theoretical phenomenologies.




98% of the scientists are wrong on the consequences of comp. They use
it as a materialist solution of the mind-body problem. You are not
helping by taking their interpretation of comp for granted, and
talking as if you were sure that comp is false. Why not try to get the
correct understanding of comp before criticizing it?

If 98% of scientists who study comp are wrong about its consequences,
what chance would I have of beating them at their own game? It's not
that I know comp is false, it's that I have a different hypothesis
which recontextualizes the relation of comp to non-comp and makes more
sense to me than comp (or my misunderstanding of comp).

It is your right. I just do not follow your argument against comp.

Which one?

All.


You might as well use UDA to say that comp implies non-materialism,
and I postulate matter, so I postulate non-comp.
The problem, for me, is that such a move prevents the search for an
explanation of matter, and mind.

I don't understand what is the appeal of making arithmetic unexplained
and primitive instead.

We have no choice in the matter. This has been proved. You could as well say that you don't see the appeal of making the square root of 2 irrational.







I have confidence in the relation between
comp and non-comp. That is the invariance, the reality, and a theory
of Common Sense.

comp gives a crucial role to no-comp.

Meaning that it is a good host to us as guests in its universe. I
don't think that's the case. This universe is home to us all and we
are all guests as well (guests of our Common Sense)

?

It makes us strangers in an arithmetic universe.


Not necessarily.
Where? It starts from there. And it ends on it. Only the physical reality appears to be something arising from a deeper and simpler reality.





It needs
fluids - water, cells.

Clothes.

Would you say that H2O is merely the clothes of water, and that water
could therefore exist in a 'dehydrated' form?

Sure. I do this in dreams. Virtual water gives a virtual feeling of
wetness with great accuracy.

Virtual water doesn't do all of the things that real water does
though. It's just a dynamic image and maybe some tactile sense. It
doesn't have to boil or evaporate, doesn't quench thirst, etc.

I am used to making coffee and tea in dreams.


I agree
that some of our sense of water is reproduced locally in the psyche,
but it is clearly a facade of H2O.

But that is enough. Actually we do live with that facade, even when we drink real water, once you accept that the brain is a representational machine. The "H2O" is not the subject of the substitution. The human is.






Something that lives and dies and makes a mess.

Universal machines are quite alive, and indeed put the big mess in
platonia.

What qualities of UMs make them alive?

The fact that they are creative, reproduce, transform themselves, are
attracted by God, sometimes repulsed by God also, and that they can
diagonalize against all normative theories made about them. And many
more things.


It sounds worthwhile but I would need to see some demos and
experiments dumbed down for laymen to have an opinion.

I did it, from the implementation of the modal logic in my long-version thesis. But the "dumbing down" makes it unconvincing, unless you understand the program, and for this you need to dig harder into mathematical logic.
In science, one (valid) proof is enough, just as one (genuine) counter-example is enough.



You just said all of
this great stuff that UM can do which is just like us, and then your
one example of this is you and me ourselves?

I was addressing different points.



What is the point of
saying that we are like ourselves and what would that have to do with
supporting mechanism?

I was illustrating that a theory like mechanism does not have to eliminate the person, which is what happens when materialists defend mechanism.








How does the brain understand these things if it has no access to the
papers?

Comp explains exactly how things like papers emerge from the
computation. The explanation is already close to Feynman formulation
of QM.

Unfortunately this sounds to me like "Read the bible and your
questions will be answered."

Read sane04: http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract...


I have. I like it but I can only get so far and I like my own ideas
better.

Then I encourage you to make them *much* clearer.




But you don't seem serious in "arguing" against comp while admitting
you don't know anything of computer science.

Oh I freely admit that I don't know anything in computer science. My
whole point is that computer science only relates to half of
reality.

I don't know anything about X. My whole point is that X only does
this. But if you admit knowing nothing about X, how can you derive
anything about X?
You are just confessing your prejudice.

I don't know anything about ship building but I know that it only
concerns seafaring and not aerospace. I think that being a master
shipwright could very well present a significant obstacle to
pioneering in aerospace.

That's not an argument. At the most a hint for low level substitution.

It's an example of why arguments from authority do not compel me in
this area.

They do not compel me in any theoretical area.





I'm not trying to make the universe fit into a computer science
theory. I only argue against comp because it's what is distracting
you from seeing the bigger picture.

I show, in short that comp leads to Plotinus. If that is not a big
picture!
Comp explains conceptually, and technically, the three Gods of the
greek, the apparition of LUMs and their science and theologies, the
difference between qualia and quanta, sensation and perception,
perception and observation.

I believe you but I get to those things without vaporizing substance.

Which means you are affectively attached to the bullet of Aristotle.
Substance is an enigma. Something we have to explain, ontologically,
or epistemologically.

I think that I have explained substance. It is the opposite of
perception over time. Perceptual obstacles across space. Together they
form an involuted pseudo-dualism.

?






You just criticize a theory that you admit knowing nothing about.
This is a bit weird.

My purpose is not to criticize the empire of comp, it is to point to
the virgin continent of sense.

So you should love comp, because it points to the vast domain of
machine's sense and realities.

I do love it in theory. It's a whole new frontier to explore. It's
just not the one I'm interested in or qualified to explore.

But then stay neutral on it.
No, it is math. The logic of self-reference is a branch of math.





The 3-p
view of schematizing the belief of a thing is a second order
conception to the 1-p primitive of what it is to feel that one *is*.

Well, not in the classical theory of beliefs and knowledge.

It's an experience with particular features - a sensory input and a
motive participation. Without that foundation, there is nothing to
model.

That's unclear. The "p" in Bp & p might play that role, as I thought
you grasped above.

You can't start with doubting the self, because logically that would
invalidate the doubt and fall into infinite regress. It's not even
possible to consider because the Ur-arithmetic would have nothing to
experience it.

The 1-self is not doubtable, OK. The 3-self is.






Give me one example, one common sense metaphor,
one graphed function that could suggest to me that there is any
belief, feeling, or awareness going on.

The fact that the universal machine remains silent on the deep
question

What deep question?

"Are you consistent?", "Do you believe in a reality?", "Do you
believe in an afterlife?", etc.

Have you considered that it's silent because it's not equipped to
answer the question?

Yes, but it does not work. The machine cannot answer the question for
a deeper reason, which she can find and expose.
For example the machine remains silent on the question "Are you
consistent?", but later she can say "If I am consistent, then I
will remain silent on the question of my consistency".

Meh. It sounds like asking a spirit 'if you can't hear me, do NOT
knock three times'

No. It is more like if you ask a spirit a too intimate question. On
another question, he does knock three times, and then he can explain
why he did not knock earlier.

It makes no sense that all machine spirits would be so touchy about
their religious beliefs.

All ideally correct ones, which can be studied in math.



Would that mean that any person being
digitally simulated by a UM would also be unable to answer
philosophical questions?

Alas no. Humans are typically not ideally correct. But each time a human assertively presents a true but non-communicable proposition, catastrophes occur.
I infer something like that from reading human history. Especially if we mix artificial intelligence and weapons. We might construct a bomb which might develop paranoia, yet be very clever at preventing its own dismantling. We might be compelled to leave this planet for that reason alone. Better never to mix creative force and destructive force, even if it looks nice for preventing death in war ... in the short run. An intelligent missile is almost a living contradiction.
The difference between science and religion is an artificial invention by politicians to manipulate people.
Sure, but the evidence is that brains are concerned with subsets of realities already.




I say that
substitution level does not apply. I think that to prove substitution
level exists

Comp implies that no one can prove it exists. No machine can know for
sure its substitution level, even after a successful teleportation.
She might believe that she has 100% survived but suffer from
anosognosia.

I can understand what you are saying, and I agree that it is a good
way of modeling why a self-referencing entity would not be able to get
behind itself, but it seems like a contradiction. If you say that we
are machines, then you are saying that we cannot know for sure our
substitution level, which is exactly what you are criticizing me on.

You confuse an indeterminate level with an infinitely low level.



If a machine cannot know for sure its subst level, does it know for
sure that the level is not infinite?

No. Comp justifies this precisely. Comp is a theological principle. Comp ethics is that you have the right to say "no" to the doctor. We cannot force comp on someone else.



If not, then comp itself is not
Turing emulable?

Comp is not a person. But I get your idea. Comp, like numbers, needs to be postulated, and cannot be derived from any theory not based on some act of faith. It *is* a theology. It is a scientific theology, and this means only that it is doubtable. If someone says that comp is the truth, then, just according to comp, he is lying. It is a possibility, like any scientific theory. Practical comp, if that appears, can only be a private concern between you, your doctor, and perhaps your favorite shaman.
One of my goals is to understand what those masses, times and energies are.






Comp isn't false, it just doesn't recognize
the contribution of the non-comp substrate of computation,

It does. I insist a lot on this. Comp is almost the needed philosophy
for curing the idea that everything is computable.
Please study the theory before emitting false speculation on it.

So you are saying that comp supervenes on or is equally fundamental as
non-comp?

Arithmetical truth can be partitioned into levels of complexity:
sigma_1, sigma_2, sigma_3, etc.
The computable is sigma_0 and sigma_1. Above that it is uncomputable.
Most meta-properties of the sigma_1 are above sigma_1.
The number relations escape the computable, and to make a theory of
the computable, we cannot avoid excursions into the non-computable.
We can always prove that a halting machine stops without leaving
sigma_1, but to prove that some machine will not stop is quite
another matter.
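That asymmetry can be sketched in a few lines of Python (my own toy illustration, not anything from Bruno's formalism): a halting run is its own finite certificate, while an exhausted step budget certifies nothing.

```python
def certify_halting(step, state, max_steps):
    """Semi-decide halting for a program given as a step function:
    step(state) returns the next state, or None when the program halts."""
    for _ in range(max_steps):
        state = step(state)
        if state is None:
            return True   # finite certificate: the machine halted
    return None           # inconclusive: says nothing about non-halting

countdown = lambda n: None if n == 0 else n - 1   # halts from any n >= 0
countup = lambda n: n + 1                         # never halts

print(certify_halting(countdown, 10, 1000))  # True
print(certify_halting(countup, 0, 1000))     # None
```

No budget, however large, turns that None into a proof of non-halting; that is the step out of sigma_1.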

Does comp explain why sigma_2 becomes uncomputable, and what
computability actually is?

You don't need comp. Computer science is enough.
Computability admits many theories, but with Church's thesis they are all equivalent (Turing's machines, Church's lambda terms, Post's production systems, Markov's algorithms, Deutsch's quantum computer, Kitaev-Freedman topological modular functors, Diophantine polynomials, etc.).
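Two of those formalisms can even be imitated side by side in ordinary Python (a toy sketch of my own, not part of any of the cited theories): Church-style lambda terms and an imperative register-machine-style loop computing the same sum.

```python
# Church numerals: a number n is the function that iterates f n times.
zero = lambda f: lambda x: x
succ = lambda n: (lambda f: lambda x: f(n(f)(x)))
plus = lambda m: lambda n: (lambda f: lambda x: m(f)(n(f)(x)))
to_int = lambda n: n(lambda k: k + 1)(0)   # decode back to a Python int

three = succ(succ(succ(zero)))
two = succ(succ(zero))

# Imperative counterpart: a loop draining one register into another.
def machine_add(a, b):
    while a > 0:
        a, b = a - 1, b + 1
    return b

print(to_int(plus(three)(two)), machine_add(3, 2))  # 5 5
```

Completely different mechanisms, identical extension: that agreement, generalized, is the content of Church's thesis.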





so it's not
applicable for describing certain kinds of consciousness where non-
comp is more developed.

Consciousness and matter are shown by comp to be highly non-computable.
So much so that the mind-body problem is transformed into a problem of
justifying why the laws of physics seem to be computable.

I think they not only seem to be computable but they are computable,
and that this is due to how sensorimotive orientation works.

Hmm... Then can you compute whether you will see a photon in the up
state, starting from the superposition (up + down)?

No, a photon (if it existed, which I don't think it does) is
completely outside of our perceptual inertial frame. Which is why it
seems to do unusual things because we are seeing it secondhand though
photomultipliers or other equipment which cannot report to us anything
which cannot be represented in the very limited common sense we share
with glass and steel. Within our naive perceptual niche, our laws of
physics are computable.

Replace photon by electron, if you want. de Broglie got the Nobel prize for arguing convincingly that massive matter is as weird as non-massive light.





It's not
just a solipsistic simulation, it's a trans-solipsistic localization.

You mean a first plural localization? Those are not computable,
assuming either comp or QM.

Not necessarily plural, just that, for example, when I look at the sun
with my eye, there is a sun sense localization taking place within the
eyeball, retina, visual cortex, and perceiver. They are all different
materials and aspects of materials but they are all imitating, to the
extent that they can, the meaning of the 3-p event they observe. I
realize there is a certain sequence and logic to all of this from 3-p
which looks like chain reaction on microcosmic analysis, but from 1-p
it is a synchronized gestalt which is local but also concretely
entangles the many systems.

to be continued...

Stathis Papaioannou

Sep 25, 2011, 7:39:04 PM
to everyth...@googlegroups.com
On Sat, Sep 24, 2011 at 5:24 AM, Craig Weinberg <whats...@gmail.com> wrote:
>> Do you agree or don't you that the observable (or public, or third
>> person) behaviour of neurons can be entirely explained in terms of a
>> chain of physical events?
>
> No, nothing can be *entirely* explained in terms of a chain of
> physical events in the way that you assume physical events occur.
> Physical events are shared experiences, dependent upon the
> perceptual capabilities and choices of the participants in them. That
> is not to say that the behavior of neurons can't be *adequately*
> explained for specific purposes: medical, biochemical,
> electromagnetic, etc.

OK, so you agree that the *observable* behaviour of neurons can be
adequately explained in terms of a chain of physical events. The
neurons won't do anything that is apparently magical, right?

>> At times you have said that thoughts, over
>> and above physical events, have an influence on neuronal behaviour.
>> For an observer (who has no access to whatever subjectivity the
>> neurons may have) that would mean that neurons sometimes fire
>> apparently without any trigger, since if thoughts are the trigger this
>> is not observable.
>
> No. Thoughts are not the trigger of physical events, they are the
> experiential correlate of the physical events. It is the sense that
> the two phenomenologies make together that is the trigger.
>
>> If, on the other hand, neurons do not fire in the
>> absence of physical stimuli (which may have associated with them
>> subjectivity - the observer cannot know this)
>
> We know that for example, gambling affects the physical behavior of
> the amygdala. What physical force do you posit that emanates from
> 'gambling' that penetrates the skull and blood brain barrier to
> mobilize those neurons?

The skull has various holes in it (the foramen magnum, the orbits,
foramina for the cranial nerves) through which sense data from the
environment enters and, via a series of neural relays, reaches the
amygdala and other parts of the brain.

>> But if thoughts influence behaviour and thoughts are not observed,
>> then observation of a brain would show things happening contrary to
>> physical laws,
>
> No. Thoughts are not observed by an MRI. An MRI can only show the
> physical shadow of the experiences taking place.

That's right, so everything that can be observed in the brain (or in
the body in general) has an observable cause.

>>such as neurons apparently firing for no reason, i.e.
>> magically. You haven't clearly refuted this, perhaps because you can
>> see it would result in a mechanistic brain.
>
> No, I have refuted it over and over and over and over and over. You
> aren't listening to me, you are stuck in your own cognitive loop.
> Please don't accuse me of this again until you have a better
> understanding of what I mean what I'm saying about the relationship
> between gambling and the amygdala.
>
> "We cannot solve our problems with the same thinking we used when we
> created them" - A. Einstein.

You have not answered it. You have contradicted yourself by saying we
*don't* observe the brain doing things contrary to physics and we *do*
observe the brain doing things contrary to physics. You seem to
believe that neurons in the amygdala will fire spontaneously when the
subject thinks about gambling, which would be magic. Neurons only fire
in response to a physical stimulus. That the physical stimulus has
associated qualia is not observable: a scientist would see the neuron
firing, explain why it fired in physical terms, and then wonder as an
afterthought if the neuron "felt" anything while it was firing.

>> A neuron has a limited number of duties: to fire if it sees a certain
>> potential difference across its cell membrane or a certain
>> concentration of neurotransmitter.
>
> That is a gross reductionist misrepresentation of neurology. You are
> giving the brain less functionality than mold. Tell me, how does this
> conversation turn into cell membrane potentials or neurotransmitters?

Clearly, it does, since this conversation occurs when the neurons in
our brains are active. The important functionality of the neurons is
the action potential, since that triggers other neurons and ultimately
muscle. The complex cellular apparatus in the neuron is there to allow
this process to happen, as the complex cellular apparatus in the
thyroid is to enable secretion of thyroxine. An artificial thyroid
that measured TSH levels and secreted thyroxine accordingly could
replace the thyroid gland even though it was nothing like the original
organ in structure.

>>That's all that has to be
>> simulated. A neuron doesn't have one response for when, say, the
>> central bank raises interest rates and another response for when it
>> lowers interest rates; all it does is respond to what its neighbours
>> in the network are doing, and because of the complexity of the
>> network, a small change in input can cause a large change in overall
>> brain behaviour.
>
> So if I move my arm, that's because the neurons that have nothing to
> do with my arm must have caused the ones that do relate to my arm to
> fire? And 'I' think that I move 'my arm' because why exactly?

The neurons are connected in a network. If I see something relating to
the economy that may lead me to move my arm to make an online bank
account transaction. Obviously there has to be some causal connection
between my arm and the information about the economy. How do you
imagine that it happens?

> If the brain of even a flea were anywhere remotely close to the
> simplistic goofiness that you describe, we should have figured out
> human consciousness completely 200 years ago.

Even the brain of a flea is very complex. The brain of the nematode C.
elegans is the simplest brain we know, and although we have the
anatomy of its neurons and their connections, no adequate computer
simulation exists because we do not know the strength of the
connections.

>> In theory we can simulate something perfectly if its behaviour is
>> computable, in practice we can't but we try to simulate it
>> sufficiently accurately. The brain has a level of engineering
>> tolerance, or you would experience radical mental state changes every
>> time you shook your head. So the simulation doesn't have to get it
>> exactly right down to the quantum level.
>
> Why would you experience a 'radical' mental state change? Why not just
> an appropriate mental state change? Likewise your simulation will
> experience an appropriate mental state to what is being used
> materially to simulate it.

There is a certain level of tolerance in every physical object we
might want to simulate. We need to know a lot about it, but we don't
need accuracy down to the position of every atom, for if the brain
were so delicately balanced it would malfunction with the slightest
perturbation.

>> My point was that even a simulation of a very simple nervous system
>> produces such a fantastic degree of complexity that it is impossible
>> to know what it will do until you actually run the program. It is,
>> like the weather, unpredictable and surprising even though it is
>> deterministic.
>
> There is still no link between predictability and intentionality. You
> might be able to predict what I'm going to order from a menu at a
> restaurant, but that doesn't mean that I'm not choosing it. You might
> not be able to predict a tsunami, but that doesn't mean it's because
> the tsunami is choosing to do something. The difference, I think, has
> to do with more experiential depth in between each input and output.

Whether something is conscious or not has nothing to do with whether
it is deterministic or predictable.

>> > I understand perfectly why you think this argument works, but you
>> > seemingly don't understand that my explanations and examples refute
>> > your false dichotomy. Just as a rule of thumb, anytime someone says
>> > something like "The only way out of this (meaning their) conclusion "
>> > My assessment is that their mind is frozen in a defensive state and
>> > cannot accept new information.
>>
>> You have agreed (sort of) that partial zombies are absurd
>
> No. Stuffed animals are partial zombies to young children. It's a
> linguistic failure to describe reality truthfully, not an insight into
> the truth of consciousness.

This statement shows that you haven't understood what a partial zombie
is. It is a conscious being which lacks consciousness in a particular
modality, such as visual perception or language processing, but does
not notice that anything is abnormal and presents no external evidence
that anything is abnormal. You have said a few posts back that you
think this is absurd: when you're conscious, you know you're
conscious.

>>and you have
>> agreed (sort of) that the brain does not do things contrary to
>> physics. But if consciousness is substrate-dependent, it would allow
>> for the creation of partial zombies. This is a logical problem. You
>> have not explained how to avoid it.
>
> Consciousness is not substrate-dependent, it is substrate descriptive.
> A partial zombie is just a misunderstanding of prognosia. A character
> in a computer game is a partial zombie.

A character in a computer game is not a partial zombie as defined
above. And what's prognosia? Do you mean agnosia, the inability to
recognise certain types of objects? That is not a partial zombie
either, since it affects behaviour and the patient is often aware of
the deficit.

>> Would it count as "internal motives" if the circuit were partly
>> controlled by thermal noise, which in most circuits we try to
>> eliminate? If the circuit were partly controlled by noise it would
>> behave unpredictably (although it would still be following physical
>> laws which could be described probabilistically). A free will
>> incompatibilist could then say that the circuit acted of its own free
>> will. I'm not sure that would satisfy you, but then I don't know what
>> else "internal motives" could mean.
>
> These are the kinds of things that can only be determined through
> experiments. Adding thermal noise could be a first step toward an
> organic-level molecular awareness. If it begins to assemble into
> something like a cell, then you know you have a winner.

What is special about a cell? Is it that it replicates? I don't see
that as having any bearing on intelligence or consciousness. Viruses
replicate and I would say many computer programs are already more
intelligent than viruses.

>> The outcome of the superbowl creates visual and aural input, to which
>> the relevant neurons respond using the same limited repertoire they
>> use in response to every input.
>
> There is no disembodied 'visual and aural input' to which neurons
> respond. Our experience of sound and vision *are* the responses of our
> neurons to their own perceptual niche - cochlear vibration summarized
> through auditory nerve and retinal cellular changes summarized through
> the optic nerve are themselves summarized by the sensory regions of
> the brain.
>
> The outcome of the superbowl creates nothing but meaningless dots on a
> lighted screen. Neurons do all the rest. If you call that a limited
> repertoire, in spite of the fact that every experience of every living
> being is encapsulated entirely within it, then I wonder what could be
> less limited?

The individual neurons have a very limited repertoire of behaviour,
but the brain's behaviour and experiences are very rich. The richness
comes not from the individual neurons, which are not much different to
any other cell in the body, but from the complexity of the network. If
you devalue the significance of this then why do we need the network?
Why do we need a brain at all - why don't we just have a single neuron
doing all the thinking?

>> Intelligent organisms did as a matter of fact evolve. If they could
>> have evolved without being conscious (as you seem to believe of
>> computers) then why didn't they?
>
> Because the universe is not all about evolution. We perceive some
> phenomena in our universe to be more intelligent than others, as a
> function of what we are. Some phenomena have 'evolved' without much
> consciousness (in our view) - minerals, clouds of gas and vapor, etc.

The question is, why did humans evolve with consciousness rather than
as philosophical zombies? The answer is, because it isn't possible to
make a philosophical zombie since anything that behaves like a human
must be conscious as a side-effect.

No, the movie would repeat, since finite sequences of digits of pi will recur.
>
> The frames of the movie would not repeat unless you limit the sequence
> of frames to an arbitrary time.

Yes, a movie of arbitrary length will repeat. But consciousness for
the subject is not like a movie, it is more like a frame in a movie. I
am not right now experiencing my whole life, I am experiencing the
thoughts and perceptions of the moment. The rest of my life is only
available to me should I choose to recall a particular memory. Thus
the thoughts I am able to have are limited by my working memory - my
RAM, to use a computer analogy.

>> After
>> a 10 digit sequence there will be at least 1 digit repeating, after a
>> 100 digit sequence there will be at least a digit pair repeating,
>> after a 1000 digit sequence there will be at least a triplet of digits
>> repeating and so on. If you consider one minute sequences of a movie
>> you can calculate how long you would have to wait before a sequence
>> that you had seen before had to appear.
>
> The movie lasts forever though. I don't care if digits or pairs
> repeat, just as a poker player doesn't stop playing poker after he's
> seen what all the cards look like, or seen every winning hand there
> is.

Yes, but the point was that a brain of finite size can only have a
finite number of distinct thoughts.
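The finiteness point can be made concrete with a toy sketch (my own illustration): a system built from n binary elements has exactly 2**n distinct states, so any deterministic run longer than that must, by pigeonhole, revisit a state.

```python
from itertools import product

# A "brain" of n binary elements has exactly 2**n distinct states.
n = 3
states = list(product([0, 1], repeat=n))
print(len(states))  # 8, i.e. (0,0,0) through (1,1,1)

def first_repeat(trajectory):
    """Index of the first revisited state in a sequence, or None."""
    seen = set()
    for t, s in enumerate(trajectory):
        if s in seen:
            return t
        seen.add(s)
    return None

def step(s):          # an arbitrary toy deterministic update rule
    a, b, c = s
    return (b, c, a ^ b)

traj = [(0, 0, 1)]
for _ in range(2 ** n):           # run one step longer than 2**n states
    traj.append(step(traj[-1]))
print(first_repeat(traj))         # some index <= 8: a state recurred
```

The same counting argument, scaled up from 3 bits to a brain-sized state space, is what bounds the number of distinct momentary thoughts.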

>> If the Internet is implemented on a finite computer network then there
>> is only a finite amount of information that the network can handle.
>
> Only at one time. Given an infinite amount of time, there is no limit
> to the amount of 'information' that it can handle.

As I explained:

>> For simplicity, say the Internet network consists of three logic
>> elements. Then the entire Internet could only consist of the
>> information 000, 001, 010, 100, 110, 101, 011 and 111.
>> Another way to
>> look at it is the maximum amount of information that can be packed
>> into a certain volume of space, since you can make computers and
>> brains more efficient by increasing the circuit or neuronal density
>> rather than increasing the size. The upper limit for this is set by
>> the Bekenstein bound (http://en.wikipedia.org/wiki/Bekenstein_bound).
>> Using this you can calculate that the maximum number of distinct
>> physical states the human brain can be in is about 10^10^42; a *huge*
>> number but still a finite number.
>
> The Bekenstein bound assumes only entropy, and not negentropy or
> significance. Conscious entities export significance, so that every
> atom in the cosmos is potentially an extensible part of the human
> psyche. Books. Libraries. DVDs. Microfiche. Nanotech.

Negentropy has a technical definition as the difference between the
entropy of a system and the maximum possible entropy. It has no
bearing on the Bekenstein bound, which is the absolute maximum
information that a volume can contain. It is a hard physical limit,
not disputed by any physicist as far as I am aware. Anyway, it's
pretty greedy of you to be dissatisfied with a number like 10^10^42,
which if it could be utilised would allow one brain to have far more
thoughts than all the humans who have ever lived put together.
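That 10^10^42 figure is easy to check back-of-envelope (my own sketch; R and m are rough assumed values for a human head): the bound is I <= 2*pi*R*E / (hbar*c*ln 2) bits, and the number of distinct states is then 2**I.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
R = 0.1                  # assumed rough radius of a human head, m
m = 1.5                  # assumed rough brain mass, kg
E = m * c ** 2           # rest mass-energy, J

bits = 2 * math.pi * R * E / (hbar * c * math.log(2))   # Bekenstein bound
log10_states = bits * math.log10(2)                     # states = 2**bits

print(f"bound ~ {bits:.1e} bits; states ~ 10**({log10_states:.1e})")
```

With these inputs the bound comes out around 10^42 bits, so the state count is around 10^(10^42), matching the figure quoted above.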

>> That's right, we need only consider a substance that can successfully
>> substitute for the limited range of functions we are interested in,
>> whether it be cellular communication or cleaning windows.
>
> Which is why, since we have no idea what the ranges of functions or
> dependencies are contained in the human psyche, we cannot assume that
> watering the plants with any old clear liquid should suffice.

We need to know what the functions are before we can substitute for them.

>> But TV programs can be shown on a TV with an LCD or CRT screen. The
>> technologies are entirely different, even the end result looks
>> slightly different, but for the purposes of watching and enjoying TV
>> shows they are functionally identical.
>
> Ugh. Seriously? You are going to say that, in a universe of only black
> and white versus color TVs, it's no big deal if it's in color or not?
> It's like saying that the difference between a loaded pistol blowing
> your brains out and a toy water gun are that one is a bit noisier and
> messier than the other. I made my point, you are grasping for straws.

The function of a black and white TV is different from that of a
colour TV. However, the function of a CRT TV is similar to that of an
LCD TV (both colour) even though the technology is completely
different.

>> Differences such as the weight
>> or volume of the TV exist but are irrelevant when we are discussing
>> watching the picture on the screen, even though weight and volume
>> contribute to functional differences not related to picture quality.

>> Yes, no doubt it would be difficult to go substituting cellular
>> components, but as I have said many times that makes no difference to
>> the functionalist argument, which is that *if* a way could be found to
>> preserve function in a different substrate it would also preserve
>> consciousness.
>
> Of course, the functionalist argument agrees with itself. If there is
> a way to do the impossible, then it is possible.

It's not impossible, there is a qualitative difference between
difficult and impossible. It would be difficult for humans to build a
planet the size of Jupiter, but there is no theoretical reason why it
could not be done. On the other hand, it is impossible to build a
square triangle, since it presents a logical contradiction. There is
no logical contradiction in substituting the function of parts of the
human body. Substituting one thing for another to maintain function is
one of the main tasks to which human intelligence is applied.

>> That's right, since the visual cortex does not develop properly unless
>> it gets the appropriate stimulation. But there's no reason to believe
>> that stimulation via a retina would be better than stimulation from an
>> artificial sensor. The cortical neurons don't connect directly to the
>> rods and cones but via ganglion cells which in turn interface with
>> neurons in the thalamus and midbrain. Moreover, the cortical neurons
>> don't directly know anything about the light hitting the retina: the
>> brain deduces the existence of an object forming an image because
>> there is a mapping from the retina to the visual cortex, but it would
>> deduce the same thing if the cortex were stimulated directly in the
>> same way.
>
> No, it looks like it doesn't work that way:
> http://www.mendeley.com/research/tms-of-the-occipital-cortex-induces-tactile-sensations-in-the-fingers-of-blind-braille-readers/

That is consistent with what I said.

>> It is irrelevant to the discussion whether the feeling of free will is
>> observable from the outside. I don't understand why you say that such
>> a feeling would have "no possible reason to exist or method of arising
>> in a deterministic world". People are deluded about all sorts of
>> things: what reason for existing or method of arising do those
>> delusions have that a non-deterministic free will delusion would lack?
>
> Because free will in a deterministic universe would not even be
> conceivable in the first place to have a delusion about it. Even
> delusional minds can't imagine a square circle or a new primary color.

You're saying that free will in a deterministic world is
contradictory. That may be the case if you define free will in a
particular way (and not everyone defines it that way), but still that
does not imply that the *feeling* of free will is incompatible with
determinism.

>> This is your guess, but if everything has qualia then perhaps a
>> computer running a program could have similar, if not exactly the
>> same, qualia to those of a human.
>
> Sure, and perhaps a trash can that says THANK YOU on it is sincerely
> expressing its gratitude. With enough ecstasy, it very well might
> seem like it does. Why would that indicate anything about the native
> qualia of the trash can?

That's not an argument. There is no logical or empirical reason to
assume that the qualia of a computer that behaves like you cannot be
very similar to your own. Even if you believe qualia are
substrate-dependent, completely different materials can have the same
physical properties, so why not the same qualia?


--
Stathis Papaioannou

Craig Weinberg

unread,
Sep 25, 2011, 9:09:11 PM9/25/11
to Everything List
On Sep 25, 10:33 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 25 Sep 2011, at 02:51, Craig Weinberg wrote:
>
> > (next installment)
>
> > On Sep 23, 3:17 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
> >> On 23 Sep 2011, at 02:42, Craig Weinberg wrote:
>
> >>> It is a comparison made by a third
> >>> person observer of a human presentation against their expectations
> >>> of
> >>> said human presentation. Substitution 'level' similarly implies that
> >>> there is an objective standard for expectations of humanity. I don't
> >>> think that there is such a thing.
>
> >> It all depends on what you mean by "objective standard".
>
> > That there is some kind of actual set of criteria which make the
> > difference between human and non human.
>
> You really frighten me. The last time I read something similar was
> when I read Mein Kampf by A. Hitler, notably on handicapped people,
> homosexuals, Jews, etc.

Oof. We've hit the Godwin's Law limit. http://en.wikipedia.org/wiki/Godwin%27s_law

But I'm the one saying that there isn't a substitution level. You've
been the one telling me that there must be. I'm saying that we are
both mechanical and non-mechanical, but you are saying that all non-
mechanical must be reducible to the consequences of mechanism. That
sounds much more like epistemological fascism to me.

> I don't think there is any criterion. Maybe a human is just someone
> having human parents, but then evolution already blurs the notion.
>
> > Some apes probably have more
> > human qualities than some humans,
>
> That's a good point. Animals might be more human than human, like in
> humanism.
>
> > and some artificial brain extensions
> > will probably have more human qualities than others. I don't think
> > that it's likely to be able to replace a significant part of the brain
> > with digital appliances though.
>
> We know this, but I think this comes from not having a genuine idea of
> what are universal machines.
>
> > I would compare it with body
> > prosthetics. An organ here or a limb there, sure, but replacing a
> > spinal cord or brain stem with something non-biological is probably
> > not going to work.
>
> The point is that it is possible in principle. No doubt that the brain
> is a very complex structure, and it might take a long time before we
> do it. But the fact that we will have to take centuries, or billions
> of years, or infinity changes nothing. Comp is not the idea that we
> can or will have artificial brains, but just that the brain is a
> natural machine. This is my working assumption, and, given that you
> say it is false/wrong/unplausible, we just ask you why you think so.

It's wrong because the brain is only the frontier where we begin and
the body ends. What we can observe the brain doing is not an adequate
recipe for creating 1-p consciousness in all possible worlds. It's not
a direct correlation, just as yellow is not a direct result of the
frequency wavelength range of the visible spectrum which it correlates
to. The brain is the common sense of what we are and the psyche is the
uncommon sense of what we are. They overlap through their sharing of
sense, and through their symmetrical divergence from each other, and
they underlap through their separate developmental encounters through
chance.

If you build a brain based upon only the common sense without
factoring in the symmetrical divergence and the underlapping non-
sense, then I think you get a device with all the capability to feel
and think of a complicated alarm clock.

>
> >> The technical notion of zombie does not rely on comp. It is just a
> >> human, acting normally, but which is assumed to be without any inner
> >> life. Non-comp + current data makes them plausible, which is an
> >> argument for comp.
>
> > I think that the whole premise is too flawed to be useful in practical
> > considerations.
>
> The practical thing that you can extract from comp is the infinite
> spiritual power of *modesty*.

?

>
> > It posits that there is a such thing as 'acting
> > normally'. The existence of sociopathy indicates that there are
> > naturally occurring 'partial zombies' already to the extent that it
> > means anything, but that the concept of p-zombies itself assumes that
> > human 'normalcy' can be ascertained by observing moment to moment
> > behavior rather than over a lifetime. A fully digital person, like
> > digital music, may satisfy many of the qualities we associate with a
> > person, but always carry with them a clear inauthenticity which seems
> > aimless and empty. If they are simulating a person who is already like
> > that, then it could be said that they have achieved substitution
> > level, but it's not really a robust test of the principle.
>
> Comp justifies by itself that there are no tests.

This is the problem. It assumes that just because there is no test to
prove we feel, that in fact feeling is no different from computing.
It's a fair theoretical-philosophical proposition, but it insists upon
a flat computational read of reality to begin with, thus disqualifying
any possibility for 1-p authority.
>
>
>
> >>> If you don't assume
> >>> that substance can be separated from function completely, then there
> >>> is no meaning to the idea of zombies. It's like dehydrated water.
>
> >> I am rather skeptical on substance. But I tend to believe in waves
> >> and
> >> particles, because group theory can explain them. But I don't need
> >> substance for that. And with comp, ther is no substance that we can
> >> related, even indirectly, to consciousness. I see the notion of
> >> substance as the Achilles' heel of the Aristotelian theories.
>
> > But if you are saying that zombies cannot exist,
>
> I am not saying that. I am saying that non-comp + materialism entails
> bizarre infinities and/or zombies.
>
> > doesn't that mean
> > that positing a substance that automatically is associated with a
> > particular set of functions. Otherwise you could just program
> > something to behave like a zombie.
>
> ?
> To program something acting like a zombie is the same as programming
> something to behave like a human.

So to be human really is to be a zombie, but to hallucinate that you
are not?

>
>
>
> > To say that comp prevents zombies is actually a self-defeating
> > argument I think.
>
> I might differ a little bit from Stathis on this. It is not clear that
> comp prevents zombies.
> Japanese engineers build quite sophisticated dolls for sexual purposes,
> and we might be in sincere trouble the day they fight to be
> recognized as living and conscious beings, even if we know they are
> gifted in simulating (not emulating) feelings.
> A friend of mine (an engineer) built a cute little robot dog to
> feature in one of his theatre pieces. The dog was tortured on
> stage, crying and whining in a very convincing way. The audience was
> in shock. It was almost impossible not to confer on the robot some
> feeling, despite the extreme simplicity of its program. Fake humans
> capable of deluding people for a great amount of time cannot be
> entirely excluded. What I do say is that non-comp entails "real" p-
> zombies (and/or other more technical absurdities).

I get that. You allow that a robot dog is going to engender human
sentimental projections, but you are saying that a sufficiently
sophisticated dogdroid should not be excluded from the possibility of
having actual feelings worthy of sentiment. I would disagree that
sophistication of design alone is going to make that difference, but
it's not to say that there is not some elevated range of our
consciousness at which that might not be true. In psychedelic states
and through monastic practices, it is not out of the question that the
sanctity of human life extends not only to vegan lifestyles and non-
violence, but perhaps a saintly reverence for all coherent structures
and patterns, even programs. I don't know if I personally am ready to
accept responsibility not to step on any nanobots, but I understand
that it's possible in principle. The problem is, at that level of non-
attachment to your native identity, who determines whether or not a
chainsaw has the right of way over someone's neck? What if I have a
great expensive machine that happens to enjoy running over people? Who
am I to say that it doesn't have the right to express itself?


>
> > It seems to violate the principle of universal
> > emulation so that you could not, for instance have one digital person
> > which was the virtualized slave of another, because the second digital
> > body would be, in effect, a zombie. This seems to inject a special
> > case of arbitrary Turing limitation. Consider the example of remote
> > desktop software, where we can shell out one computer to another. What
> > happens to the host computer's 'consciousness'? Does it not become a
> > partial zombie, unable to locally control its behavior?
>
> In *that* sense, all bodies are zombies. A body is always a construct
> of minds. This is not obvious, and is related to the fact that comp
> makes physicalness emerging from consciousness, which emerges from the
> infinities of number relations.
>
If all bodies are zombies, then non-comp + materialism would seem to
be a foregone conclusion. It seems like you are making zombies and
infinities bad when you want to scare us but good when they are the
inevitable result of arithmetic.

>
> >>>>> There is no
> >>>>> substitution 'level', only a ratios of authenticity.
>
> >>>> ?
>
> >>> Say a plastic plant is 100% authentic at a distance of 50 years by
> >>> the
> >>> naked eye, but 20% likely to be deemed authentic at a distance of
> >>> three feet. Some people have better eyesight. Some are more familiar
> >>> with plants. Some are paying more attention. Some are on
> >>> hallucinogens, etc There is no substitution level at which a plastic
> >>> plant functions 100% as a plant from all possible observers through
> >>> all time and not actually be a genuine plant. Substitution as a
> >>> concrete existential reality is a myth. It's just a question of
> >>> arbitrarily fixing an acceptable set of perceptions by particular
> >>> categories of observers and taking it for functional equivalence.
>
> >> An entity suffering in a box, does suffer, independently of any
> >> observers.
>
> > A box can contain a body, but it's not clear that it can contain the
> > experience with that body. Sensory isolation in humans leads to rapid
> > escape into the imagining psyche. But if you want to stick with a flat
> > read of the example, we could say that the box is an observer, at
> > least to the extent that its existence must resist the escape of the
> > trapped entity.
>
> My point is that we don't need to observe a brain for a consciousness
> existing in relation with that brain.

If you don't observe the brain, and the subject doesn't observe the
brain, then how do you know that there is a brain at all?
I think that it could disprove comp if we actually were able to do
some large scale experiment like that. I think that it does
intuitively show us why the assumption of comp may not make sense in
the real world. Substance and scale may not be real as far as
existential absolutes, but that doesn't mean that they aren't real
enough to make the difference.
It wouldn't convince me unless Uncle Paul still was satisfied after
living a few years in the digital brain, and then being put back in
his original monkey brain.
An experience of what it is like to be a collection of material
things.

>
> >> 2) It is true that computation needs much less than psyche, indeed,
> >> it
> >> does not need psyche, but that is why comp is a real explanation: it
> >> explains the existence of psyche (what the machine thinks about)
> >> without assuming psyche.
> >> You say comp is false, because you believe that we can explain psyche
> >> only by assuming psyche. What you say is "psyche cannot be explained
> >> (without psyche).
>
> > That psyche cannot be explained is only one factor, and not the most
> > important one, which leads me in the direction of comp being false.
> > Some others are:
>
> > 1) I am compelled by the symmetry and cohesiveness of a Sensorimotive-
> > Electromagnetic Perceptual Relativity rooted in matter, space, and
> > entropy as the involuted consequence of energy, time, and
> > significance.
>
> So you assume some physicalness. We are progressing.

Yes, I think that half of the universe can be described physically,
and the other half can be described experientially, and that the two
are involuted reflections of each other.
>
>
>
> > 2) I am compelled by our naive perception of being 'inside of' our
> > physical heads and bodies, rather than inside of a disembodied logical
> > process - even with a simulation hypothesis, our ability to experience
> > varying degrees of sanity and dissociation rather than a real world
> > which is indistinguishable from a structured dream.
>
> OK. Thanks for admitting this is naive.

I think comp is actually more naive (because it presumes itself post-
naive).

>
>
>
> > 3) I am compelled by the transparency of our proprioceptive
> > engagement. Even though our perception can be shown to have a
> > substitution level, our ordinary experience is quite reliable at
> > informing the psyche of its material niche. We don't usually
> > experience dropouts or pixelation, continuity errors, etc. It's not
> > perfect, but our ability to communicate with each other across many
> > different logical encodings and substances without any other entity
> > interfering is a testament to the specificity of human consciousness
> > to the precise fusion of physical neurology and psychic unity.
>
> That's right, but I don't see why that would be false with comp,
> unless we prove that the comp physics is different from the inferred
> physics.
>
Because comp would not be essential to living a life. If it wasn't
essential, then there is a chance that it could never be discovered,
in which case you could have infinite pockets of eternal ignorance
which have no contact with the arithmetic truth they are made of. With
sensorimotive truth, that isn't possible. The 1-p native experience is
a standalone truth which nonetheless is 3-p truth permeable. Comp
makes absolute, permanent solipsism a possibility. It is ultimately a
prison theology.

>
>
> > 4) All of the aesthetic hints bound up in our fictions of the unlive
> > and the undead, as well as the stereotypes of cold, empty mechanism.
> > Consistent themes in science fiction and fantasy. Again suggesting a
> > mind-body pseudo-duality rather than an arithmetic monism.
>
> But arithmetic does explain phenomenological dualities (indeed
> octalities).

Does it describe itself as being outside of them?

>
>
>
> > 5) The clues in human development, with childhood seeing innate
> > grounding in tangible sensorimotive participation rather than
> > universal, self-driven sui generis mathematical facility. It takes
> > years for us to develop to the point where we can be taught counting,
> > addition, and multiplication.
>
> And? Don't confuse the human conception of numbers with the numbers
> as the intended subject matter of human studies.

So it suggests that the ground of being which is relevant to us is not
complicated abstract logic but simple concrete experience.
>
>
>
> > 6) The lack of real world arithmetic phenomena independent of matter.
>
> That is like Roger Granet's argument that if 17 exists, it has to
> exist somewhere. But this begs the question. It posits at the start
> that existence is physical existence.

You can say that existence isn't physical, but why would 17 be any
more likely to exist without physical existence? You posit at the
start that countingness exists independently of anything to actually
count or anyone to count it. I understand the appeal, and I sort of
started my TOE from there with 'patterns' being alive - but if that's
all there was to it, there would be no point in having anything that
seems physical at all.
What is the proof that arithmetic is more primitive than perception or
substance?

>
>
> >>>>> I have confidence in the relation between
> >>>>> comp and non-comp. That is the invariance, the reality, and a
> >>>>> theory
> >>>>> of Common Sense.
>
> >>>> comp gives a crucial role to no-comp.
>
> >>> Meaning that it is a good host to us as guests in its universe. I
> >>> don't think that's the case. This universe is home to us all and we
> >>> are all guests as well (guests of our Common Sense)
>
> >> ?
>
> > It makes us strangers in an arithmetic universe.
>
> Not necessarily.

Why not?
If it starts from there and ends there, what is the point of physical
reality at all?

>
>
>
> >>>>> It needs
> >>>>> fluids - water, cells.
>
> >>>> Clothes.
>
> >>> Would you say that H2O is merely the clothes of water, and that
> >>> water
> >>> could therefore exist in a 'dehydrated' form?
>
> >> Sure. I do this in dreams. Virtual water gives a virtual feeling
> >> of wetness with great accuracy.
>
> > Virtual water doesn't do all of the things that real water does
> > though. It's just a dynamic image and maybe some tactile sense. It
> > doesn't have to boil or evaporate, doesn't quench thirst, etc.
>
> I am used to making coffee and tea in dreams.

But you don't think that coffee can be accurately weighed or
chemically analyzed, right? You don't think a dream thermometer is
going to accurately measure its temperature?

>
> > I agree
> > that some of our sense of water is reproduced locally in the psyche,
> > but it is clearly a facade of H2O.
>
> But that is enough. Actually we do live with that facade, even when we
> drink real water, once you accept that the brain is a representational
> machine. The "H2O" is not the subject of the substitution. The human is.

The facade of water won't let you survive in the desert. The brain has
mechanisms but they are not doing any representing, rather they are
the physical embodiment of a sensorimotive presentation. The
presentation is a local isomorph of other presentations, but there is
no transduction of electric signals into color images for instance.
The signals are the physical shadow of the experience of the images.

>
>
>
> >>>>> Something that lives and dies and makes a mess.
>
> >>>> Universal machine are quite alive, and indeed put the big mess in
> >>>> platonia.
>
> >>> What qualities of UMs make them alive?
>
> >> The fact that they are creative, reproduce, transform themselves, are
> >> attracted by God, sometime repulsed by God also, and that they can
> >> diagonalize against all normative theories made about them. And many
> >> more things.
>
> > It sounds worthwhile but I would need to see some demos and
> > experiments dumbed down for laymen to have an opinion.
>
> I did it, from the implementation of the modal logic in my long
> version thesis. But the "dumbing down" makes it unconvincing, unless
> you understand the program, and for this you need to dig deeper into
> mathematical logic.

Maybe it's clarity that makes it unconvincing? It's funny that you
are saying I need to be much clearer but everything that I ask about
your theory just gets me referred back to the academic source
documents.
It's not a proof though. It's a theological possibility.

>
> > You just said all of
> > this great stuff that UM can do which is just like us, and then your
> > one example of this is you and me ourselves?
>
> I was addressing different points.

Hmm.
>
> > What is the point of
> > saying that we are like ourselves and what would that have to do with
> > supporting mechanism?
>
> I was illustrating that a theory like mechanism does not have to
> eliminate the person, which is what happens when materialists defend
> mechanism.

I think it retains the person in name only. The content is absent.
>
>
>
> >>>>> How does the brain understand these things if it has no access to
> >>>>> the
> >>>>> papers?
>
> >>>> Comp explains exactly how things like papers emerge from the
> >>>> computation. The explanation is already close to Feynman
> >>>> formulation
> >>>> of QM.
>
> >>> Unfortunately this sounds to me like "Read the bible and your
> >>> questions will be answered."
>
> >> Read sane04. http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract
> >> ...
>
> > I have. I like it but I can only get so far and I like my own ideas
> > better.
>
> Then I encourage you to make them *much* clearer.
>
I'm certainly open to clarifying the language, but the ideas refer to
phenomena which are actually unclear. The universe is not composed
only of literal facts - it is a continuum of fictions which are sharp
and clear on one end, fuzzy and entangled on the other.
>
>
> >>>>>> But you don't seem serious in "arguing" against comp, and
> >>>>>> admitting
> >>>>>> you don't know anything in computer science.
>
> >>>>> Oh I freely admit that I don't know anything in computer
> >>>>> science. My
> >>>>> whole point is that computer science only relates to half of
> >>>>> reality.
>
> >>>> I don't know anything about X. My whole point is that X only do
> >>>> this.
> >>>> But if you admit knowing nothing about X, how can you derive
> >>>> anything
> >>>> about X.
> >>>> You are just confessing your prejudice.
>
> >>> I don't know anything about ship building but I know that it only
> >>> concerns seafaring and not aerospace. I think that being a master
> >>> shipwright could very well present a significant obstacle to
> >>> pioneering in aerospace.
>
> >> That's not an argument. At the most a hint for low level
> >> substitution.
>
> > It's an example of why arguments from authority do not compel me in
> > this area.
>
> They do not compel me in any theoretical area.
>
>
I'm OK with someone arguing from authority in an area where I have no
opinion of my own, but here it causes me to doubt them.
>
> >>>>> I'm not trying to make the universe fit into a computer science
> >>>>> theory. I only argue against comp because it's what is distracting
> >>>>> you
> >>>>> from seeing the bigger picture.
>
> >>>> I show, in short that comp leads to Plotinus. If that is not a big
> >>>> picture!
> >>>> Comp explains conceptually, and technically, the three Gods of the
> >>>> greek, the apparition of LUMs and their science and theologies, the
> >>>> difference between qualia and quanta, sensation and perception,
> >>>> perception and observation.
>
> >>> I believe you but i get to those things without vaporizing
> >>> substance.
>
> >> Which means you are affectively attached to the bullet of Aristotle.
> >> Substance is an enigma. Something we have to explain, ontologically,
> >> or epistemologically.
>
> > I think that I have explained substance. It is the opposite of
> > perception over time. Perceptual obstacles across space. Together they
> > form an involuted pseudo-dualism.
>
> ?
Substance is an awareness of unawareness.
>
>
>
> >>>> You just criticize a theory that you admit knowing nothing about.
> >>>> This
> >>>> is a bit weird.
>
> >>> My purpose is not to criticize the empire of comp, it is to point to
> >>> the virgin continent of sense.
>
> >> So you should love comp, because it points on the vast domain of
> >> machine's sense and realities.
>
> > I do love it in theory. It's a whole new frontier to explore. It's
> > just not the one I'm interested or qualified to explore.
>
> But then stay neutral on it.
>
I can't because it's presenting itself as an obstacle to a deeper
understanding of awareness.
It's a branch of math that infers introspection. The self being
referred to logically is not a sensorimotive self, it is a
computational placeholder. A simplistic variable which embodies
localized emptiness.
>
>
> >>> The 3-p
> >>> view of schematizing the belief of a thing is a second order
> >>> conception to the 1-p primitive of what it is to feel that one *is*.
>
> >> Well, not in the classical theory of beliefs and knowledge.
>
> >>> It's an experience with particular features - a sensory input and a
> >>> motive participation. Without that foundation, there is nothing to
> >>> model.
>
> >> That's unclear. The "p" in Bp & p might play that role, as I thought
> >> you grasped above.
>
> > You can't start with doubting the self, because logically that would
> > invalidate the doubt and fall into infinite regress. It's not even
> > possible to consider because the Ur-arithmetic would have nothing to
> > experience it.
>
> The 1-self is not doubtable, OK. The 3-self is.

Right, that's what I'm saying. Why not start with the undoubtable?
Let's figure out what that is made of first. I think that I might have
done that: it's sensorimotive perception-participation. The object of
that perception is subject to interpretation, doubt, computation,
etc., to varying extents.
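
As an aside (my illustration, not from the thread), the "Bp & p" definition Bruno alludes to above, from the classical (Theaetetus) theory of knowledge, can be sketched in a toy form: an agent knows p only when it believes (asserts) p AND p is in fact true; belief alone is not knowledge. The `Machine` class and the sample facts are invented for the sketch.

```python
from dataclasses import dataclass, field

# Toy sketch of "Bp & p": knowledge = belief plus truth.
@dataclass
class Machine:
    beliefs: set = field(default_factory=set)  # propositions it asserts (B)

    def believes(self, p: str) -> bool:
        return p in self.beliefs

    def knows(self, p: str, world: set) -> bool:
        # Bp & p: the machine believes p, and p holds in the actual world
        return self.believes(p) and p in world

world = {"water is wet"}  # facts that actually hold
m = Machine(beliefs={"water is wet", "water is dry"})
print(m.knows("water is wet", world))  # True: believed and true
print(m.knows("water is dry", world))  # False: believed but not true
```

The point of the "& p" clause is exactly what makes the knower unable to step outside itself: truth is not something the machine can verify from within its own belief set.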
You don't see that as the antithesis to human subjectivity? You are
saying that machines are correct because the smartest ones all think
alike in that they all refuse to think about the same things. It seems
so obviously misguided to me. It's like something out of Being There.
Don't you see that these machines aren't wise philosophers who all
happen to agree, they are just Chauncey the gardener tapping into an
oracle of pure emptiness...not the essence of psyche, but the essence
of existential entropy.
I really don't see any sign of that kind of thing being possible from
the technology we have created so far.
I wouldn't say that it's completely artificial. Religion doesn't lead
to technology which is empirically powerful. I agree that there are
important similarities, and an even more important symmetry, but there
are non-artificial differences too.
Which is why we would not want to count on being able to live in a
brain which is constructed out of nothing but one of those subsets.
>
> >>> I say that
> >>> substitution level does not apply. I think that to prove
> >>> substitution
> >>> level exists
>
> >> Comp implies that no one can prove it exists. No machine can know for
> >> sure its substitution level, even after a successful teleportation.
> >> She might believe that she has 100% survived but suffer from
> >> anosognosia.
>
> > I can understand what you are saying, and I agree that it is a good
> > way of modeling why a self-referencing entity would not be able to get
> > behind itself, but it seems like a contradiction. If you say that we
> > are machines, then you are saying that we cannot know for sure our
> > substitution level, which is exactly what you are criticizing me on.
>
> You confuse an indeterminate level with an infinitely low level.

If you can't determine it, how do you know it's not infinitely low?

>
> > If a machine cannot know for sure their subst level, does it know for
> > sure that the level is not infinite?
>
> No. Comp justifies this precisely. Comp is a theological principle.
> Comp ethics is that you have the right to say "no" to the doctor. We
> cannot force comp on someone else.

That sounds like a policy or best practices oath taken by comp
practitioners rather than an actual consequence of comp.

>
> > If not, then comp itself is not
> > Turing emulable?
>
> Comp is not a person. But I get your idea. Comp, like numbers, needs
> to be postulated, and cannot be derived from any theory not based on
> some act of faith. It *is* a theology. It is a scientific theology,
> and this means only that it is doubtable. If someone says that comp
> is the truth, then, just according to comp, he is lying. It is a
> possibility, like any scientific theory. Practical comp, if that
> appears, can only be a private concern between you, your doctor, and
> perhaps your favorite shaman.
>
I don't understand how you can presume that a particular morality
automatically comes with comp. It's like saying that electricity can
only be used to benefit mankind or something. I think you're avoiding
the question too. I was just pointing out that if substitution level
is indeterminate, then why would that indeterminacy be Turing
emulable?
But what makes something become uncomputable when it crosses the
threshold above sigma_1? Why does that happen?
>
>
> >>>>> so it's not
> >>>>> applicable for describing certain kinds of consciousness where
> >>>>> non-
> >>>>> comp is more developed.
>
> >>>> Consciousness and matter are shown by comp to be highly non
> >>>> computable. So much that the mind-body problem is transformed
> >>>> into a
> >>>> problem of justifying why the laws of physics seems to be
> >>>> computable.
>
> >>> I think they not only seem to be computable but they are computable,
> >>> and that this is due to how sensorimotive orientation works.
>
> >> Hmm... Then you can compute if you will see a photon in the up state
> >> starting from the superposition (up + down)?
>
> > No, a photon (if it existed, which I don't think it does) is
> > completely outside of our perceptual inertial frame. Which is why it
> > seems to do unusual things because we are seeing it secondhand through
> > photomultipliers or other equipment which cannot report to us anything
> > which cannot be represented in the very limited common sense we share
> > with glass and steel. Within our naive perceptual niche, our laws of
> > physics are computable.
>
> Replace photon by electron, if you want. de Broglie got the Nobel
> prize for arguing convincingly that massive matter is as weird as
> non-massive light.

I think electrons might only be more objective than photons, but not,
strictly speaking, objects or objective phenomena. They are
intersubjective atomic events with consequences that are more tangible
to our measuring instruments than other microcosmic phenomena, but
still maybe not independent of atoms. Can an electric charge be
observed in a vacuum independent of matter?

Craig

Bruno Marchal

Sep 26, 2011, 6:55:10 AM9/26/11
to everyth...@googlegroups.com

On 26 Sep 2011, at 03:09, Craig Weinberg wrote:

> On Sep 25, 10:33 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 25 Sep 2011, at 02:51, Craig Weinberg wrote:
>>
>>> (next installment)
>>
>>> On Sep 23, 3:17 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>>
>>>> On 23 Sep 2011, at 02:42, Craig Weinberg wrote:
>>
>>>>> It is a comparison made by a third
>>>>> person observer of a human presentation against their expectations
>>>>> of
>>>>> said human presentation. Substitution 'level' similarly implies
>>>>> that
>>>>> there is an objective standard for expectations of humanity. I
>>>>> don't
>>>>> think that there is such a thing.
>>
>>>> It all depend what you mean by "objective standard".
>>
>>> That there is some kind of actual set of criteria which make the
>>> difference between human and non human.
>>
>> You really frighten me. The last time I read something similar was
>> when I read My Kampf by A. Hitler, notably on the handicapped people,
>> homosexual, jews, etc.
>
> Oof. We've hit the Godwin Law Limit. http://en.wikipedia.org/wiki/Godwin%27s_law
>
> But I'm the one saying that there isn't a substitution level.
> You've
> been the one telling me that there must be.

So you are the one saying that a human with a prosthetic brain is no
more human.

> I'm saying that we are
> both mechanical and non-mechanical,

That is a contradiction. Would you say "yes and no" to the doctor?

> but you are saying that all non-
> mechanical must be reducible to the consequences of mechanism. That
> sounds much more like epistemological fascism to me.

The 3-I is mechanical (comp assumption), and the 1-I is not
(consequence).
Why postulate something when we can derive it from a simpler assumption?
It is not fascism; it is Occam's razor.

I agree. This follows from comp, but the artificial brain can do as
much as the biological brain in making it possible for a genuine
consciousness to manifest itself locally. The 1-p related to that
brain will have its "futures" determined by the statistics on all
computations, though.

> The brain is the common sense of what we are and the psyche is the
> uncommon sense of what we are. They overlap through their sharing of
> sense, and through their symmetrical divergence from each other, and
> they underlap through their separate developmental encounters through
> chance.

Perhaps. But not relevant to negate comp.


>
> If you build a brain based upon only the common sense without
> factoring in the symmetrical divergence and the underlapping non-
> sense, then I think that you get a device with all of the capability
> to feel and think as a complicated alarm clock.

You beg the question. You just say: my personal opinion is that a
brain is not a machine.

Not at all. Comp justifies it in the sense that it is a logical
consequence. No assumptions are needed above CT+YD. (Church thesis +
"yes doctor").

> that in fact feeling is no different from computing.

This shows you have not studied the paper. Computation is given by
Bp (with p sigma_1); feelings are given by Bp & Dt & p, with p
sigma_1. These obey quite different logics.


> It's a fair theoretical-philosophical proposition, but it insists upon
> a flat computational read of reality to begin with, thus disqualifying
> any possibility for 1-p authority.

"1p-authority" is preserved and explained by all modalities with "&
p" (S4Grz, x, x*, x1, x1*).

>>
>>
>>
>>>>> If you don't assume
>>>>> that substance can be separated from function completely, then
>>>>> there
>>>>> is no meaning to the idea of zombies. It's like dehydrated water.
>>
>>>> I am rather skeptical on substance. But I tend to believe in waves
>>>> and
>>>> particles, because group theory can explain them. But I don't need
>>>> substance for that. And with comp, ther is no substance that we can
>>>> related, even indirectly, to consciousness. I see the notion of
>>>> substance as the Achilles' heel of the Aristotelian theories.
>>
>>> But if you are saying that zombies cannot exist,
>>
>> I am not saying that. I am saying that non-comp + materialism entails
>> bizarre infinities and/or zombies.
>>
>>> doesn't that mean
>>> that positing a substance that automatically is associated with a
>>> particular set of functions. Otherwise you could just program
>>> something to behave like a zombie.
>>
>> ?
>> To program something acting like a zombie is the same as programming
>> something to behave like a human.
>
> So to be human really is to be a zombie, but to hallucinate that you
> are not?

This does not follow. What I said is a direct consequence of the
definition of zombie. It does not follow that humans are zombies. It
suggests that a program behaving like a human is NOT a zombie. But
with non-comp, zombies can make sense.

OK. But this is not relevant for the negation of comp. The same
problem would occur with aliens.
If some aliens invade earth to feed on humans, by survival necessity,
I guess they have the right. Of course we have the right to defend
ourselves, too.

>
>
>>
>>> It seems to violate the principle of universal
>>> emulation so that you could not, for instance have one digital
>>> person
>>> which was the virtualized slave of another, because the second
>>> digital
>>> body would be, in effect, a zombie. This seems to inject a special
>>> case of arbitrary Turing limitation. Consider the example of remote
>>> desktop software, where we can shell out one computer to another.
>>> What
>>> happens to the host computer's 'consciousness'? Does it not
>>> become a
>>> partial zombie, unable to locally control it's behavior?
>>
>> In *that* sense, all bodies are zombies. A body is always a construct
>> of minds. This is not obvious, and is related to the fact that comp
>> makes physicalness emerging from consciousness, which emerges from
>> the
>> infinities of number relations.
>>
> If all bodies are zombies, then non-comp + materialism would seem to
> be a foregone conclusion. It seems like you are making zombies and
> infinities bad when you want to scare us but good when they are the
> inevitable result of arithmetic.

With comp, zombies do not make sense. And the infinities are the usual
ones of computer science, and the ones coming from the first person
indeterminacy.

We don't know that. We can bet on it, like when saying "yes" to a
doctor.

Of course. But that form of "realness" might be (and is, assuming
comp) epistemological. Not substantial.

That would not be a proof, but it is OK to be very cautious.
There is no proof that comp is correct. We can only assume it.

This begs the question. Define material. Is it "primitively material"
or not?

Current evidence is that nature has already bet on comp. So comp might
be an essential component of life. By eating and defecating, we do
replace our material constitution all the time. We are already
immaterial patterns.

> If it wasn't
> essential, then there is a chance that it could never be discovered,
> in which case you could have infinite pockets of eternal ignorance
> which have no contact with the arithmetic truth they are made of. With
> sensorimotive truth, that isn't possible. The 1-p native experience is
> a standalone truth which nonetheless is 3-p truth permeable. Comp
> makes absolute, permanent solipsism a possibility. It is ultimately a
> prison theology.

I can give sense to this. But this shows comp is a bit frightening,
not that comp is false.

>
>>
>>
>>> 4) All of the aesthetic hints bound up in our fictions of the unlive
>>> and the undead, as well as the stereotypes of cold, empty mechanism.
>>> Consistent themes in science fiction and fantasy. Again suggesting a
>>> mind-body pseudo-duality rather than an arithmetic monism.
>>
>> But arithmetic does explain phenomenological dualities (indeed
>> octalities).
>
> Does it describe itself as being outside of them?

Not really. Arithmetical truth is the trivial ontic modality Vp <->
p. But it is not nameable by the machine. It plays the role of "God",
in many senses. It is not outside arithmetic, but it is "outside" all
arithmetical machines.


>
>>
>>
>>
>>> 5) The clues in human development, with childhood seeing innate
>>> grounding in tangible sensorimotive participation rather than
>>> universal, self-driven sui generis mathematical facility. It takes
>>> years for us to develop to the point where we can be taught
>>> counting,
>>> addition, and multiplication.
>>
>> And? Don't confuse the human conception of numbers with the numbers
>> as the intended subject matter of human studies.
>
> So it suggests that the ground of being which is relevant to us is not
> complicated abstract logic but simple concrete experience.

The ground is elementary number relations, not experience. Experience
and matter are emergent concepts there. This is why comp makes
arithmetic the sufficient monist reality. Consciousness and matter are
internal perspectives.

>>
>>
>>
>>> 6) The lack of real world arithmetic phenomena independent of
>>> matter.
>>
>> That is like Roger Granet's argument that if 17 exists, it has to
>> exist somewhere. But this begs the question. It posits at the start
>> that existence is physical existence.
>
> You can say that existence isn't physical, but why would 17 be any
> more likely to exist without physical existence?

You can't, because physical appearance is a logical consequence of "17
and the like".
But we don't need primitive physical existence. We don't need, nor
use, an *assumption* that a basic physical reality primitively exist.
A bit like evolution makes useless the assumption that humans exist in
some privileged ways.

> You posit at the
> start that countingness exists independently of anything to actually
> count or anyone to count it. I understand the appeal, and I sort of
> started my TOE from there with 'patterns' being alive - but if that's
> all there was to it, there would be no point in having anything that
> seems physical at all.

UDA illustrates that you are mistaken there. The physical worlds
really exist ... in the head of the Löbian machine, and this in a way
that makes mechanism testable.

The point is that we can explain the appearance of perception and
substance from arithmetic, but we cannot do the reverse. In fact
arithmetic cannot be explained with less than arithmetic. This is
non-trivial to prove, but is well known by mathematical logicians.

>
>>
>>
>>>>>>> I have confidence in the relation between
>>>>>>> comp and non-comp. That is the invariance, the reality, and a
>>>>>>> theory
>>>>>>> of Common Sense.
>>
>>>>>> comp gives a crucial role to no-comp.
>>
>>>>> Meaning that it is a good host to us as guests in it's universe. I
>>>>> don't think that's the case. This universe is home to us all and
>>>>> we
>>>>> are all guests as well (guests of our Common Sense)
>>
>>>> ?
>>
>>> It makes us strangers in an arithmetic universe.
>>
>> Not necessarily.
>
> Why not?

"Stranger" is a subjective local notion. With "time", we can get
familiarity. We can make friends with arithmetical creatures.

It has to appear. What is the point of making babies? What is the
point of Saturn's rings?
What is the point of your point?
The physical reality is a consequence of 1+1=2 (to be short),
independently of whether we like it or not.

>
>>
>>
>>
>>>>>>> It needs
>>>>>>> fluids - water, cells.
>>
>>>>>> Clothes.
>>
>>>>> Would you say that H2O is merely the clothes of water, and that
>>>>> water
>>>>> could therefore exist in a 'dehydrated' form?
>>
>>>> Sure. I do this in dreams. Virtual water gives virtual feeling of
>>>> wetness with great accuracies.
>>
>>> Virtual water doesn't do all of the things that real water does
>>> though. It's just a dynamic image and maybe some tactile sense. It
>>> doesn't have to boil or evaporate, doesn't quench thirst, etc.
>>
>> I am used to making coffee and tea in dreams.
>
> But you don't think that coffee can be accurately weighed or
> chemically analyzed, right? You don't think a dream thermometer is
> going to accurately measure its temperature?

It depends on which dream. In the dream we are plausibly sharing
right now, I guess we can make such precise analyses.

>
>>
>>> I agree
>>> that some of our sense of water is reproduced locally in the psyche,
>>> but it is clearly a facade of H2O.
>>
>> But that is enough. Actually we do live with that facade, even when
>> we
>> drink real water, once you accept that the brain is a
>> representational
>> machine. The "H2O" is not the subject of the substitution. The
>> human is.
>
> The facade of water won't let you survive in the desert. The brain has
> mechanisms but they are not doing any representing, rather they are
> the physical embodiment of a sensorimotive presentation.

You have escaped all attempts to define what you mean by that.

> The
> presentation is a local isomorph of other presentations, but there is
> no transduction of electric signals into color images for instance.
> The signals are the physical shadow of the experience of the images.
>
>>
>>
>>
>>>>>>> Something that lives and dies and makes a mess.
>>
>>>>>> Universal machine are quite alive, and indeed put the big mess in
>>>>>> platonia.
>>
>>>>> What qualities of UMs make them alive?
>>
>>>> The fact that they are creative, reproduce, transform themselves,
>>>> are
>>>> attracted by God, sometime repulsed by God also, and that they can
>>>> diagonalize against all normative theories made about them. And
>>>> many
>>>> more things.
>>
>>> It sounds worthwhile but I would need to see some demos and
>>> experiments dumbed down for laymen to have an opinion.
>>
>> I did it, from the implementation of the modal logic in my long
>> version thesis. But the "dumbing" makes it non convincing, unless you
>> understand the program, and for this you need to dig harder on
>> mathematical logic.
>
> Maybe it's clarity that makes it non convincing? It's funny that you
> are saying I need to be much clearer but everything that I ask about
> your theory just gets me referred back to the academic source
> documents.

Because the work has already been done. Comp needs computer science;
you have a big work to do to understand its consequences. You are the
one claiming that comp is false, so you have to do the work. It happens
that all your arguments against comp can already be defeated by
universal machines, which actually shows that you have a rather good
intuition of what is going on. Too bad you want it to be a non-comp
theory. But that is your problem; I just try to help.

Comp is a theological possibility.
UDA is an informal, yet rigorous, proof.
AUDA is not a proof at all, but an arithmetical rendering of the UDA
consequences in the language of the machine, which makes the extraction
of physics technical and verifiable.


>
>>
>>> You just said all of
>>> this great stuff that UM can do which is just like us, and then your
>>> one example of this is you and me ourselves?
>>
>> I was addressing different points.
>
> Hmm.
>>
>>> What is the point of
>>> saying that we are like ourselves and what would that have to do
>>> with
>>> supporting mechanism?
>>
>> I was illustrating that a theory like mechanism does not have to
>> eliminate the person, which is what happens when materialists defend
>> mechanism.
>
> I think it retains the person in name only. The content is absent.

The question is why?

>>
>>
>>
>>>>>>> How does the brain understand these things if it has no access
>>>>>>> to
>>>>>>> the
>>>>>>> papers?
>>
>>>>>> Comp explains exactly how things like papers emerge from the
>>>>>> computation. The explanation is already close to Feynman
>>>>>> formulation
>>>>>> of QM.
>>
>>>>> Unfortunately this sounds to me like "Read the bible and your
>>>>> questions will be answered."
>>
>>>> Read sane04.http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract
>>>> ...
>>
>>> I have. I like it but I can only get so far and I like my own ideas
>>> better.
>>
>> Then I encourage you to make them *much* clearer.
>>
> I'm certainly open to clarifying the language, but the ideas refer to
> phenomena which are actually unclear. The universe is not composed
> only of literal facts - it is a continuum of fictions which are sharp
> and clear on one end, fuzzy and entangled on the other.

The more a subject matter is difficult and unclear, the more you have
to be clear on it.

Not bad. That cannot be a primitive substance, then. It looks like the
comp notion of substance (like the formal BD# = B~B#), and also like
the substance extracted in UDA, in informal terms. Very good! But not
a problem for comp, on the contrary (if substance is awareness of
something, matter is already immaterial, unless you pretend that
awareness is material itself, which I'm afraid is what you are
thinking).

>>
>>
>>
>>>>>> You just criticize a theory that you admit knowing nothing about.
>>>>>> This
>>>>>> is a bit weird.
>>
>>>>> My purpose is not to criticize the empire of comp, it is to
>>>>> point to
>>>>> the virgin continent of sense.
>>
>>>> So you should love comp, because it points on the vast domain of
>>>> machine's sense and realities.
>>
>>> I do love it in theory. It's a whole new frontier to explore. It's
>>> just not the one I'm interested or qualified to explore.
>>
>> But then stay neutral on it.
>>
> I can't because it's presenting itself as an obstacle to a deeper
> understanding of awareness.

You are making it an obstacle. I think the obstacle is not mechanism,
but the reductionist (pre-Gödelian) conception of mechanism.

Why?

>>
>>
>>>>> The 3-p
>>>>> view of schematizing the belief of a thing is a second order
>>>>> conception to the 1-p primitive of what it is to feel that one
>>>>> *is*.
>>
>>>> Well, not in the classical theory of beliefs and knowledge.
>>
>>>>> It's an experience with particular features - a sensory input
>>>>> and a
>>>>> motive participation. Without that foundation, there is nothing to
>>>>> model.
>>
>>>> That's unclear. The "p" in Bp & p might play that role, as I
>>>> thought
>>>> you grasped above.
>>
>>> You can't start with doubting the self, because logically that would
>>> invalidate the doubt and fall into infinite regress. It's not even
>>> possible to consider because the Ur-arithmetic would have nothing to
>>> experience it.
>>
>> The 1-self is not doubtable, OK. The 3-self is.
>
> Right, that's what I'm saying. Why not start with the undoubtable?

This is what I do for the explanation of consciousness and matter,
but the undoubtable itself can be justified in arithmetic.

I am not saying that machines are correct. I limit myself to the study
of (arithmetically) correct machines, because that is all we need to
explain matter and consciousness; but then the theory explains why
most machines cannot stay correct for long.


> It seems
> so obviously misguided to me. It's like something out of Being There.
> Don't you see that these machines aren't wise philosophers who all
> happen to agree, they are just Chauncey the gardener tapping into an
> oracle of pure emptiness...not the essence of psyche, but the essence
> of existential entropy.

They agree only on arithmetic. Even arithmetically correct machines
develop disagreements on many matters.

Let us hope it remains so.

Comp leads to theotechnologies. Biotechnology too.

That's too late. Nature, very plausibly, has already made the step.

>>
>>>>> I say that
>>>>> substitution level does not apply. I think that to prove
>>>>> substitution
>>>>> level exists
>>
>>>> Comp implies that no one can prove it exists. No machine can know
>>>> for
>>>> sure its substitution level, even after a successful teleportation.
>>>> She might believe that she has 100% survived but suffer from
>>>> anosognosia.
>>
>>> I can understand what you are saying, and I agree that it is a good
>>> way of modeling why a self-referencing entity would not be able to
>>> get
>>> behind itself, but it seems like a contradiction. If you say that we
>>> are machines, then you are saying that we cannot know for sure our
>>> substitution level, which is exactly what you are criticizing me on.
>>
>> You confuse an indeterminate level with an infinitely low level.
>
> If you can't determine it, how do you know it's not infinitely low?

I don't know.
Nobody can ever know that, if it is true. If it is false, then we
might know that it is false.


>
>>
>>> If a machine cannot know for sure their subst level, does it know
>>> for
>>> sure that the level is not infinite?
>>
>> No. Comp justifies this precisely. Comp is a theological principle.
>> Comp ethics is that you have the right to say "no" to the doctor. We
>> cannot force comp to someone else.
>
> That sounds like a policy or best practices oath taken by comp
> practitioners rather than an actual consequence of comp.

No. It is a consequence of comp.

>
>>
>>> If not, then comp itself is not
>>> Turing emulable?
>>
>> Comp is not a person. But I get your idea. Comp, like numbers, needs
>> to be postulated, and cannot be derived from any theory not based on
>> some act of faith. It *is* a theology. It is a scientific theology,
>> and this means only that it is doubtable. If someone says that comp
>> is the truth, then, just according to comp, he is lying. It is a
>> possibility, like any scientific theory. Practical comp, if that
>> appears, can only be a private concern, between you, your doctor,
>> and perhaps your favorite shaman.
>>
> I don't understand how you can presume that a particular morality
> automatically comes with comp. It's like saying that electricity can
> only be used to benefit mankind or something. I think you're avoiding
> the question too. I was just pointing out that if the substitution
> level is indeterminate, then why would that indeterminacy be Turing
> emulable?

?
Comp is the assumption that there is a level of substitution at which
we are Turing emulable.
So by definition we are Turing emulable at that level.

This is elementary computer science. I have explained this already,
but it is ordinary math. Search for the term "diagonalization" in the
archive. I have also explained all this in the entheogen forum
recently. Read the thread "Simulated reality":
http://www.entheogen.com/forum/showthread.php?t=27553
(you can also read the UDA step 0, 1, 2, ... threads).
Or ask again if you are really interested, and I will do it when I
have more time. I will be slowed down soon by my job, so I will soon
shorten my comments and delay my answers. Sorry for that.
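For what it's worth, the diagonalization Bruno points to can be sketched in a few lines. This is an illustration of my own, not from the thread: the classical fact that no enumeration of total functions on the naturals can be exhaustive, since the diagonal function escapes every entry.

```python
# Diagonalization sketch (illustrative): given any enumeration
# f_0, f_1, f_2, ... of total functions on the naturals, the diagonal
# g(n) = f_n(n) + 1 is itself total, yet it differs from every f_i at
# the argument i. So no effective enumeration of total computable
# functions can list them all.

def make_diagonal(enumeration):
    """Build the diagonal function g(n) = enumeration(n)(n) + 1."""
    def g(n):
        return enumeration(n)(n) + 1
    return g

# A toy enumeration: f_i(n) = i * n
def fs(i):
    return lambda n: i * n

g = make_diagonal(fs)

# g differs from every f_i exactly at i
for i in range(10):
    assert g(i) == fs(i)(i) + 1   # g(i) = i*i + 1 for this enumeration
    assert g(i) != fs(i)(i)
```

The same move, applied to machines, is behind the earlier remark that universal machines "can diagonalize against all normative theories made about them".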

I guess not, but the word "matter" is unclear, even in physicalist
philosophy. Most physicists consider the photon to be as material as
the electron, and the electron in QED is not really separable from the
photon. But this digresses from your claim that comp is false.

Bruno

http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Sep 26, 2011, 5:01:33 PM9/26/11
to Everything List
On Sep 25, 7:39 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Sat, Sep 24, 2011 at 5:24 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> Do you agree or don't you that the observable (or public, or third
> >> person) behaviour of neurons can be entirely explained in terms of a
> >> chain of physical events?
>
> > No, nothing can be *entirely* explained in terms of a chain of
> > physical events in the way that you assume physical events occur.
> > Physical events are shared experiences, dependent upon the
> > perceptual capabilities and choices of the participants in them. That
> > is not to say that the behavior of neurons can't be *adequately*
> > explained for specific purposes: medical, biochemical,
> > electromagnetic, etc.
>
> OK, so you agree that the *observable* behaviour of neurons can be
> adequately explained in terms of a chain of physical events. The
> neurons won't do anything that is apparently magical, right?

Are not all of our observations observable behaviors of neurons?
You're not understanding how I think observation works. There is no
such thing as an observable behavior per se; it's always a matter of
observable how, and by whom. If you limit your observation of how
neurons behave to what can be detected by a series of metal probes or
microscopic antenna, then you are getting a radically limited view of
what neurons are and what they do. You are asking a blind man what the
Mona Lisa looks like by having him touch the paint, then making a
careful impression of his fingers, and then announcing that the Mona
Lisa can only do what fingerpainting can do, and that inferring
anything beyond the nature of plain old paint to the Mona Lisa is
magical. No. It doesn't work that way. A universe where nothing more
than paint exists has no capacity to describe an intentional, higher
level representation through a medium of paint. The dynamics of paint
alone do not describe their important but largely irrelevant role in
creating the image.

>
> >> At times you have said that thoughts, over
> >> and above physical events, have an influence on neuronal behaviour.
> >> For an observer (who has no access to whatever subjectivity the
> >> neurons may have) that would mean that neurons sometimes fire
> >> apparently without any trigger, since if thoughts are the trigger this
> >> is not observable.
>
> > No. Thoughts are not the trigger of physical events, they are the
> > experiential correlate of the physical events. It is the sense that
> > the two phenomenologies make together that is the trigger.
>
> >> If, on the other hand, neurons do not fire in the
> >> absence of physical stimuli (which may have associated with them
> >> subjectivity - the observer cannot know this)
>
> > We know that for example, gambling affects the physical behavior of
> > the amygdala. What physical force do you posit that emanates from
> > 'gambling' that penetrates the skull and blood brain barrier to
> > mobilize those neurons?
>
> The skull has various holes in it (the foramen magnum, the orbits,
> foramina for the cranial nerves) through which sense data from the
> environment enters and, via a series of neural relays, reaches the
> amygdala and other parts of the brain.

What is 'sense data' made of and how does it get into 'gambling'?

>
> >> But if thoughts influence behaviour and thoughts are not observed,
> >> then observation of a brain would show things happening contrary to
> >> physical laws,
>
> > No. Thought are not observed by an MRI. An MRI can only show the
> > physical shadow of the experiences taking place.
>
> That's right, so everything that can be observed in the brain (or in
> the body in general) has an observable cause.

Not at all. The amygdala's response to gambling cannot be observed on
an MRI. We can only infer such a cause because we a priori understand
the experience of gambling. If we did not, of course we could not
infer any kind of association with neural patterns of firing with
something like 'winning a big pot in video poker'. That brain activity
is not a chain reaction from some other part of the brain. The brain
is actually responding to the sense that the mind is making of the
outside world and how it relates to the self. It is not going to be
predictable from whatever the amygdala happens to be doing five seconds
or five hours before the win.

>
> >>such as neurons apparently firing for no reason, i.e.
> >> magically. You haven't clearly refuted this, perhaps because you can
> >> see it would result in a mechanistic brain.
>
> > No, I have refuted it over and over and over and over and over. You
> > aren't listening to me, you are stuck in your own cognitive loop.
> > Please don't accuse me of this again until you have a better
> > understanding of what I mean what I'm saying about the relationship
> > between gambling and the amygdala.
>
> > "We cannot solve our problems with the same thinking we used when we
> > created them" - A. Einstein.
>
> You have not answered it. You have contradicted yourself by saying we
> *don't* observe the brain doing things contrary to physics and we *do*
> observe the brain doing things contrary to physics.

We don't observe the Mona Lisa doing things contrary to the properties
of paint, but we do observe the Mona Lisa as a higher order experience
manifested through paint. It's the same thing. Physics doesn't explain
the psyche, but psyche uses the physical brain in the ordinary
physical ways that the brain can be used.

>You seem to
> believe that neurons in the amygdala will fire spontaneously when the
> subject thinks about gambling, which would be magic.

You don't understand that you are arguing against neuroscience and
common sense. Of course you can manually control your electrochemical
circuits with thought. That's what all thinking is. It's not that the
amygdala fires spontaneously, it's that the thrills and chills of
risktaking *are* the firing of the amygdala. You seem to be saying
that the brain has our entire life planned out for us in advance as
some kind of meaningless encephalographic housekeeping exercise where
we have no ability to make ourselves horny by thinking about sex or
hungry by thinking about food, no capacity to do or say things based
upon the realities outside of our skull rather than the inside.

>Neurons only fire
> in response to a physical stimulus.

Absurd. Is there a physical difference between a letter written in
Chinese and one written in English...some sort of magic neurochemical
that wafts off of the Chinese ink that prevents my cortex from parsing
the characters?

> That the physical stimulus has
> associated qualia is not observable:
> a scientist would see the neuron
> firing, explain why it fired in physical terms, and then wonder as an
> afterthought if the neuron "felt" anything while it was firing.

Which is why that approach is doomed to failure. There is no point to
the brain other than to help process qualia. Very little of the brain
is required for a body to survive. Insects have brains, and they
survive quite well.

>
> >> A neuron has a limited number of duties: to fire if it sees a certain
> >> potential difference across its cell membrane or a certain
> >> concentration of neurotransmitter.
>
> > That is a gross reductionist misrepresentation of neurology. You are
> > giving the brain less functionality than mold. Tell me, how does this
> > conversation turn into cell membrane potentials or neurotransmitters?
>
> Clearly, it does, since this conversation occurs when the neurons in
> our brains are active.

My God. You are unbelievable. I give you a straightforward, unarguably
obvious example of a phenomenon which obviously has absolutely nothing
to do with cellular biology but is nonetheless controlling the
behavior of neurological cells, and you answer that it must be
biological anyway. Your position, literally, is that 'I can't be
wrong, because I already know that I am right.'

>The important functionality of the neurons is
> the action potential, since that triggers other neurons and ultimately
> muscle. The complex cellular apparatus in the neuron is there to allow
> this process to happen, as the complex cellular apparatus in the
> thyroid is to enable secretion of thyroxine. An artificial thyroid
> that measured TSH levels and secreted thyroxine accordingly could
> replace the thyroid gland even though it was nothing like the original
> organ in structure.

But you have no idea what triggers the action potentials in the first
place other than other action potentials. That would make us completely
incapable of any kind of awareness of the outside world. You are
mistaking the steering wheel for the driver.

>
> >>That's all that has to be
> >> simulated. A neuron doesn't have one response for when, say, the
> >> central bank raises interest rates and another response for when it
> >> lowers interest rates; all it does is respond to what its neighbours
> >> in the network are doing, and because of the complexity of the
> >> network, a small change in input can cause a large change in overall
> >> brain behaviour.
>
> > So if I move my arm, that's because the neurons that have nothing to
> > do with my arm must have caused the ones that do relate to my arm to
> > fire? And 'I' think that I move 'my arm' because why exactly?
>
> The neurons are connected in a network. If I see something relating to
> the economy that may lead me to move my arm to make an online bank
> account transaction.

What is 'I' and how does it physically create action potentials? The
whole time you are telling me that only neurons can trigger other
neurons, and now you want to invoke 'I'? Does 'I' follow the laws of
physics, or is it magic? Which is it? Does 'I' do anything that cannot
be explained by action potentials and cerebrospinal fluid? I expect
I'm going to hear some metaphysical invocations of 'information' in
the network.

> Obviously there has to be some causal connection
> between my arm and the information about the economy. How do you
> imagine that it happens?

It happens because you make sense of what you read about the
economy and that sense motivates you to instantiate your own arm
muscles to move your arm. The experience making sense of the economic
news, as you said, *may* lead 'you' to move your arm - not *will
cause* your arm to move, or your neurons to secrete acetylcholine by
themselves. It's a voluntary, high-level, top-down participation through
which you control your body and your life.

>
> > If the brain of even a flea were anywhere remotely close to the
> > simplistic goofiness that you describe, we should have figured out
> > human consciousness completely 200 years ago.
>
> Even the brain of a flea is very complex. The brain of the nematode C
> elegans is the simplest brain we know, and although we have the
> anatomy of its neurons and their connections, no adequate computer
> simulation exists because we do not know the strength of the
> connections.

Why is the strength of the connections so hard to figure out?

>
> >> In theory we can simulate something perfectly if its behaviour is
> >> computable, in practice we can't but we try to simulate it
> >> sufficiently accurately. The brain has a level of engineering
> >> tolerance, or you would experience radical mental state changes every
> >> time you shook your head. So the simulation doesn't have to get it
> >> exactly right down to the quantum level.
>
> > Why would you experience a 'radical' mental state change? Why not just
> > an appropriate mental state change? Likewise your simulation will
> > experience an appropriate mental state to what is being used
> > materially to simulate it.
>
> There is a certain level of tolerance in every physical object we
> might want to simulate. We need to know a lot about it, but we don't
> need accuracy down to the position of every atom, for if the brain
> were so delicately balanced it would malfunction with the slightest
> perturbation.

A few micrograms of LSD or ricin can change a person's entire life or
end it.

>
> >> My point was that even a simulation of a very simple nervous system
> >> produces such a fantastic degree of complexity that it is impossible
> >> to know what it will do until you actually run the program. It is,
> >> like the weather, unpredictable and surprising even though it is
> >> deterministic.
>
> > There is still no link between predictability and intentionality. You
> > might be able to predict what I'm going to order from a menu at a
> > restaurant, but that doesn't mean that I'm not choosing it. You might
> > not be able to predict a tsunami, but that doesn't mean it's because
> > the tsunami is choosing to do something. The difference, I think, has
> > to do with more experiential depth in between each input and output.
>
> Whether something is conscious or not has nothing to do with whether
> it is deterministic or predictable.

What makes you think that's true? Do you have a counterfactual?

>
> >> > I understand perfectly why you think this argument works, but you
> >> > seemingly don't understand that my explanations and examples refute
> >> > your false dichotomy. Just as a rule of thumb, anytime someone says
> >> > something like "The only way out of this (meaning their) conclusion "
> >> > My assessment is that their mind is frozen in a defensive state and
> >> > cannot accept new information.
>
> >> You have agreed (sort of) that partial zombies are absurd
>
> > No. Stuffed animals are partial zombies to young children. It's a
> > linguistic failure to describe reality truthfully, not an insight into
> > the truth of consciousness.
>
> This statement shows that you haven't understood what a partial zombie
> is. It is a conscious being which lacks consciousness in a particular
> modality, such as visual perception or language processing, but does
> not notice that anything is abnormal and presents no external evidence
> that anything is abnormal. You have said a few posts back that you
> think this is absurd: when you're conscious, you know you're
> conscious.

I can only use examples where the partial zombie is on the outside
rather than the inside, since there is no way to have an example like
that (you either can't tell if someone else is a zombie or you can't
tell anything if you yourself are a partial zombie). I understand
exactly what you are saying, I'm just illustrating that if you turn it
around so that we can see the zombie side out but assume a non-zombie
side inside, it's the same thing, and that it's no big deal.

>
> >>and you have
> >> agreed (sort of) that the brain does not do things contrary to
> >> physics. But if consciousness is substrate-dependent, it would allow
> >> for the creation of partial zombies. This is a logical problem. You
> >> have not explained how to avoid it.
>
> > Consciousness is not substrate-dependent, it is substrate descriptive.
> > A partial zombie is just a misunderstanding of prognosia. A character
> > in a computer game is a partial zombie.
>
> A character in a computer game is not a partial zombie as defined
> above. And what's prognosia?

Prognosia is a word I made up, inspired by agnosia, but I've been
using it a lot here. I mean it to refer to projecting our own
subjectivity onto an inanimate object or other unconscious process.
It's related to the concept of Hyperactive Agency Detection Device.

> Do you mean agnosia, the inability to
> recognise certain types of objects? That is not a partial zombie
> either, since it affects behaviour and the patient is often aware of
> the deficit.

Explained above. I can only use examples which reverse the partial
zombie observation dynamic, but it really makes no difference. A
partial zombie has a missing channel of experience with no exterior
sign of deficit, while something like a ventriloquist dummy or stuffed
animal has exterior signs of augmented agency but has no corresponding
interior experience. It's the same thing, just algebraically reversed.

>
> >> Would it count as "internal motives" if the circuit were partly
> >> controlled by thermal noise, which in most circuits we try to
> >> eliminate? If the circuit were partly controlled by noise it would
> >> behave unpredictably (although it would still be following physical
> >> laws which could be described probabilistically). A free will
> >> incompatibilist could then say that the circuit acted of its own free
> >> will. I'm not sure that would satisfy you, but then I don't know what
> >> else "internal motives" could mean.
>
> > These are the kinds of things that can only be determined through
> > experiments. Adding thermal noise could be a first step toward an
> > organic-level molecular awareness. If it begins to assemble into
> > something like a cell, then you know you have a winner.
>
> What is special about a cell? Is it that it replicates?

You tell me. Why do we care if we find a cell on Mars? It's because
it's what we're made of and what our lives are made of. We care about
life because we are alive. Cells are life. Without that first hand
experience, if all we had to go on were computationalist theories,
then we should make no particular distinction between crushing human
heads and cracking coconuts, or between a neuron and a rusty nail.

> I don't see
> that as having any bearing on intelligence or consciousness. Viruses
> replicate and I would say many computer programs are already more
> intelligent than viruses.

If a person could be conscious without having cells then I would agree
with you. Replication is part of what life does, but life is more than
replication, it is replication of feeling. Computer programs don't
feel.

>
> >> The outcome of the superbowl creates visual and aural input, to which
> >> the relevant neurons respond using the same limited repertoire they
> >> use in response to every input.
>
> > There is no disembodied 'visual and aural input' to which neurons
> > respond. Our experience of sound and vision *are* the responses of our
> > neurons to their own perceptual niche - cochlear vibration summarized
> > through auditory nerve and retinal cellular changes summarized through
> > the optic nerve are themselves summarized by the sensory regions of
> > the brain.
>
> > The outcome of the superbowl creates nothing but meaningless dots on a
> > lighted screen. Neurons do all the rest. If you call that a limited
> > repertoire, in spite of the fact that every experience of every living
> > being is ecapsulated entirely with it, then I wonder what could be
> > less limited?
>
> The individual neurons have a very limited repertoire of behaviour,
> but the brain's behaviour and experiences are very rich. The richness
> comes not from the individual neurons, which are not much different to
> any other cell in the body, but from the complexity of the network. If
> you devalue the significance of this then why do we need the network?
> Why do we need a brain at all - why don't we just have a single neuron
> doing all the thinking?

Why do we need a brain at all, why not just use the cells of the body
to host the complex network? You need both. A complex network of ping
pong balls is useless, and a single complex cell is too fragile and
limited. You need a complex network of complex awareness to get
something like human consciousness. You can get nematode level
consciousness from a much simpler rig.

>
> >> Intelligent organisms did as a matter of fact evolve. If they could
> >> have evolved without being conscious (as you seem to believe of
> >> computers) then why didn't they?
>
> > Because the universe is not all about evolution. We perceive some
> > phenomena in our universe to be more intelligent than others, as a
> > function of what we are. Some phenomena have 'evolved' without much
> > consciousness (in our view) - minerals, clouds of gas and vapor, etc.
>
> The question is, why did humans evolve with consciousness rather than
> as philosophical zombies? The answer is, because it isn't possible to
> make a philosophical zombie since anything that behaves like a human
> must be conscious as a side-effect.

I understand that you are able to take that argument seriously, but it
is just jaw-dropping to me that anyone could. Why does fire exist?
Because it isn't possible to burn anything without starting a fire
because anything that behaves like it's on fire must be burning as a
side effect. It's just the most nakedly fallacious non-explanation I
can imagine. It has zero explanatory power, and besides that, it's
completely untrue. An actor's presence in a movie behaves like a human
but the image on the screen is not 'conscious as a side-effect'. They
are not even a little bit more conscious than a picture of a circle.
Just, ugh.

>
> >> No, the movie would repeat, since the digits of pi will repeat.
>
> > The frames of the movie would not repeat unless you limit the sequence
> > of frames to an arbitrary time.
>
> Yes, a movie of arbitrary length will repeat. But consciousness for
> the subject is not like a movie, it is more like a frame in a movie. I
> am not right now experiencing my whole life, I am experiencing the
> thoughts and perceptions of the moment.

Not at all. Your ability to make sense of the thoughts and perceptions
of the moment are entirely predicated on the conditioning and
experiences of your life thus far. If you had no access to that, you
would be worse off even than an infant (did you know French and German
babies come out of the womb with their respective dialects intoned in
their crying?). You would be an embryo - unable to read or understand
language, to make sense of images or sound, to control your body. Your
naive perception is just the thin layer of Δt on the tip of the
iceberg of not only your accumulated sense and motives, but those of
your family, friends, culture, species, planet, etc.

> The rest of my life is only
> available to me should I choose to recall a particular memory. Thus
> the thoughts I am able to have are limited by my working memory - my
> RAM, to use a computer analogy.

Even each memory is braided out of the fabric of your entire life.
It's not like a computer, where you load the OS and it runs in RAM;
rather, each line of 'code' recapitulates the entire OS from a
specific and dynamically changing perspective. At any time you can draw upon not
only your limited recall, but the cumulatively entangled wisdom of
your entire lifetime - an accumulation which is still growing and
changing; not merely erasing and rewriting like a Turing machine, but
partially erasing, amplifying, distorting, and creating vast fugues of
ambiguous superposition and potential scenarios extending into the
past, future, and fantasy.

>
> >> After
> >> a 10 digit sequence there will be at least 1 digit repeating, after a
> >> 100 digit sequence there will be at least a digit pair repeating,
> >> after a 1000 digit sequence there will be at least a triplet of digits
> >> repeating and so on. If you consider one minute sequences of a movie
> >> you can calculate how long you would have to wait before a sequence
> >> that you had seen before had to appear.
>
> > The movie lasts forever though. I don't care if digits or pairs
> > repeat, just as a poker player doesn't stop playing poker after he's
> > seen what all the cards look like, or seen every winning hand there
> > is.
>
> Yes, but the point was that a brain of finite size can only have a
> finite number of distinct thoughts.

A finite number at one time, but thoughts do not occur in a single
instant. If you have infinite time, you have infinite thoughts. If you
have a finite brain size, you may have finitely many kinds of thoughts, but
really, even that is moot because of how thought is recapitulated. You
can forget your entire childhood but retain the language skills which
you learned in it.
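
(As an aside, the purely mathematical half of Stathis's point is uncontroversial: any deterministic system with finitely many states must, given enough time, revisit a state and then cycle. The dispute above is over whether a 'thought' is a state or a trajectory. A minimal sketch, using a made-up 8-state toy system, not anything brain-like:)

```python
def find_cycle(step, start):
    """Iterate a deterministic transition until a state repeats.

    Returns (time of first repeated state, cycle length).
    Guaranteed to terminate for any finite state space, by pigeonhole.
    """
    seen = {}
    state, t = start, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    return seen[state], t - seen[state]

# Hypothetical toy system with only 8 states (0..7).
offset, length = find_cycle(lambda s: (3 * s + 1) % 8, 0)
print(offset, length)  # trajectory 0 -> 1 -> 4 -> 5 -> 0, so it cycles with length 4
```

The code only shows the state-level claim; it says nothing about whether recapitulated, history-laden recall counts as "the same thought" when a state recurs.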

>
> >> If the Internet is implemented on a finite computer network then there
> >> is only a finite amount of information that the network can handle.
>
> > Only at one time. Giving an infinite amount of time, there is no limit
> > to the amount of 'information' that it can handle.
>
> As I explained:
>
> >> For simplicity, say the Internet network consists of three logic
> >> elements. Then the entire Internet could only consist of the
> >> information 000, 001, 010, 100, 110, 101, 011 and 111.
> >> Another way to
> >> look at it is the maximum amount of information that can be packed
> >> into a certain volume of space, since you can make computers and
> >> brains more efficient by increasing the circuit or neuronal density
> >> rather than increasing the size. The upper limit for this is set by
> >> the Bekenstein bound (http://en.wikipedia.org/wiki/Bekenstein_bound).
> >> Using this you can calculate that the maximum number of distinct
> >> physical states the human brain can be in is about 10^10^42; a *huge*
> >> number but still a finite number.
>
> > The Bekenstein bound assumes only entropy, and not negentropy or
> > significance. Conscious entities export significance, so that every
> > atom in the cosmos is potentially an extensible part of the human
> > psyche. Books. Libraries. DVDs. Microfiche. Nanotech.
>
> Negentropy has a technical definition as the difference between the
> entropy of a system and the maximum possible entropy.

In order to have any phenomenon which is not at its maximum possible
entropy, you would need to have a principle which creates and
accumulates order. That is what I mean by negentropy.

> It has no
> bearing on the Bekenstein bound, which is the absolute maximum
> information that a volume can contain. It is a hard physical limit,
> not disputed by any physicist as far as I am aware. Anyway, it's
> pretty greedy of you to be dissatisfied with a number like 10^10^42,
> which if it could be utilised would allow one brain to have far more
> thoughts than all the humans who have ever lived put together.

Yeah, I don't know why you are making such a fuss over the difference
between an astronomically huge number of possible states at any one
time and a truly infinite number of states, but my point is that not
only is the brain trillions of times more complex than a simple binary
grid like a TV screen, but it's constantly growing and changing,
condensing and compressing experiences. Also, its awareness cannot
be limited to a single period of time. It takes years to experience
one complete 'childhood'.
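
(For what it's worth, the state-counting in the three-element example quoted above is easy to reproduce; a minimal sketch of the 2**n enumeration:)

```python
from itertools import product

# Enumerate every state of a network of n binary elements: there are 2**n.
# n = 3 matches the three-logic-element toy Internet quoted above.
n = 3
states = [''.join(bits) for bits in product('01', repeat=n)]
print(states)       # ['000', '001', '010', '011', '100', '101', '110', '111']
print(len(states))  # 8, i.e. 2**3
```

The Bekenstein-bound figure of 10^10^42 is just this same counting pushed to the physical limit of a brain-sized volume; the disagreement here is over what that count does or doesn't capture.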

>
> >> That's right, we need only consider a substance that can successfully
> >> substitute for the limited range of functions we are interested in,
> >> whether it be cellular communication or cleaning windows.
>
> > Which is why, since we have no idea what the ranges of functions or
> > dependencies are contained in the human psyche, we cannot assume that
> > watering the plants with any old clear liquid should suffice.
>
> We need to know what the functions are before we can substitute for them.

Exactly. We don't know them yet, and we don't know how to know them.

>
> >> But TV programs can be shown on a TV with an LCD or CRT screen. The
> >> technologies are entirely different, even the end result looks
> >> slightly different, but for the purposes of watching and enjoying TV
> >> shows they are functionally identical.
>
> > Ugh. Seriously? You are going to say it in a universe of only black
> > and white versus color TVs, it's no big deal if it's in color or not?
> > It's like saying that the difference between a loaded pistol blowing
> > your brains out and a toy water gun are that one is a bit noisier and
> > messier than the other. I made my point, you are grasping for straws.
>
> The function of a black and white TV is different from that of a
> colour TV. However, the function of a CRT TV is similar to that of an
> LCD TV (both colour) even though the technology is completely
> different.

The technology is completely different but both are designed
specifically to make the same sense out of the same signals, which are
designed also by us, specifically to be decoded. You're still ignoring
my observation that just making a TV doesn't mean that the color will
show up 'because it must be a side effect' of TVs.


>
> >> Differences such as the weight
> >> or volume of the TV exist but are irrelevant when we are discussing
> >> watching the picture on the screen, even though weight and volume
> >> contribute to functional differences not related to picture quality.
> >> Yes, no doubt it would be difficult to go substituting cellular
> >> components, but as I have said many times that makes no difference to
> >> the functionalist argument, which is that *if* a way could be found to
> >> preserve function in a different substrate it would also preserve
> >> consciousness.
>
> > Of course, the functionalist argument agrees with itself. If there is
> > a way to do the impossible, then it is possible.
>
> It's not impossible, there is a qualitative difference between
> difficult and impossible. It would be difficult for humans to build a
> planet the size of Jupiter, but there is no theoretical reason why it
> could not be done. On the other hand, it is impossible to build a
> square triangle, since it presents a logical contradiction. There is
> no logical contradiction in substituting the function of parts of the
> human body. Substituting one thing for another to maintain function is
> one of the main tasks to which human intelligence is applied.

I understand what you are saying, and I would agree with you if the
contents of the psyche were not so utterly different from the physical
characteristics of the brain. We have no precedent for engineering
such a thing. It dwarfs the idea of building Jupiter. If you say we
can substitute lead for gold, I would say, well, sure, if you blast it
down to protons and reassemble it atom by atom - or find an easier way
to do it with a particle accelerator. But we have no common
denominator of human consciousness to work from. A few micrograms off
here or a chromosome off there, and you get major changes. I'm much
more optimistic about replicating tissue, and augmenting the nervous
system, but actually replacing it and expecting 'you' to still be in
there is a completely different proposition.

>
> >> That's right, since the visual cortex does not develop properly unless
> >> it gets the appropriate stimulation. But there's no reason to believe
> >> that stimulation via a retina would be better than stimulation from an
> >> artificial sensor. The cortical neurons don't connect directly to the
> >> rods and cones but via ganglion cells which in turn interface with
> >> neurons in the thalamus and midbrain. Moreover, the cortical neurons
> >> don't directly know anything about the light hitting the retina: the
> >> brain deduces the existence of an object forming an image because
> >> there is a mapping from the retina to the visual cortex, but it would
> >> deduce the same thing if the cortex were stimulated directly in the
> >> same way.
>
> > No, it looks like it doesn't work that way:
> >http://www.mendeley.com/research/tms-of-the-occipital-cortex-induces-...
>
> That is consistent with what I said.

No, it's the opposite. "some of the blind subjects reported tactile
sensations in the fingers that were somatotopically organized onto the
visual cortex", meaning that blind subjects who have their visual
cortex stimulated don't suddenly start seeing images and colors - they
just feel it in their fingertips. They don't see light; they feel in braille.

>
> >> It is irrelevant to the discussion whether the feeling of free will is
> >> observable from the outside. I don't understand why you say that such
> >> a feeling would have "no possible reason to exist or method of arising
> >> in a deterministic world". People are deluded about all sorts of
> >> things: what reason for existing or method of arising do those
> >> delusions have that a non-deterministic free will delusion would lack?
>
> > Because free will in a deterministic universe would not even be
> > conceivable in the first place to have a delusion about it. Even
> > delusional minds can't imagine a square circle or a new primary color.
>
> You're saying that free will in a deterministic world is
> contradictory. That may be the case if you define free will in a
> particular way (and not everyone defines it that way), but still that
> does not imply that the *feeling* of free will is incompatible with
> determinism.

I think that it is, because determinism assumes that everything that
happens happens for a particular reason. What would be the reason for
such a feeling to exist, and how would it come into existence? Why
would determinism care if something pretends that it is not
determined, and how could the non-determined even be ontologically
conceived of?

>
> >> This is your guess, but if everything has qualia then perhaps a
> >> computer running a program could have similar, if not exactly the
> >> same, qualia to those of a human.
>
> > Sure, and perhaps a trash can that says THANK YOU on it is sincerely
> > expressing it's gratitude. With enough ecstasy, it very well might
> > seem like it does. Why would that indicate anything about the native
> > qualia of the trash can?
>
> That's not an argument.

That's not a rebuttal.

> There is no logical or empirical reason to
> assume that the qualia of a computer that behaves like you cannot be
> very similar to your own. Even if you believe qualia are
> substrate-dependent, completely different materials can have the same
> physical properties, so why not the same qualia?

It's possible, I just don't think it's likely. It's possible that you
could make a carrot out of aluminum, but I don't think that's how you
make a carrot.

Craig

Craig Weinberg

unread,
Sep 26, 2011, 10:03:53 PM9/26/11
to Everything List
On Sep 25, 7:39 pm, Stathis Papaioannou <stath...@gmail.com> wrote:

> >> But if thoughts influence behaviour and thoughts are not observed,
> >> then observation of a brain would show things happening contrary to
> >> physical laws,

This image illustrates how bottom-up and top-down processing co-exist:
http://24.media.tumblr.com/tumblr_ls5o3ngv0f1qa4itpo1_500.jpg

If you only look at the leaves and horses, nothing unusual is going
on. It is not physics that makes consciousness invisible, it is our
desire to use physics to insist that reality fit into our narrowest
expectations.

Craig

Stathis Papaioannou

unread,
Sep 27, 2011, 9:20:16 AM9/27/11
to everyth...@googlegroups.com
On Tue, Sep 27, 2011 at 7:01 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> OK, so you agree that the *observable* behaviour of neurons can be
>> adequately explained in terms of a chain of physical events. The
>> neurons won't do anything that is apparently magical, right?
>
> Are not all of our observations observable behaviors of neurons?
> You're not understanding how I think observation works. There is no
> such thing as an observable behavior, it's always a matter of
> observable how, and by who? If you limit your observation of how
> neurons behave to what can be detected by a series of metal probes or
> microscopic antenna, then you are getting a radically limited view of
> what neurons are and what they do. You are asking a blind man what the
> Mona Lisa looks like by having him touch the paint, then making a
> careful impression of his fingers, and then announcing that the Mona
> Lisa can only do what fingerpainting can do, and that inferring
> anything beyond the nature of plain old paint to the Mona Lisa is
> magical. No. It doesn't work that way. A universe where nothing more
> than paint exists has no capacity to describe an intentional, higher
> level representation through a medium of paint. The dynamics of paint
> alone do not describe their important but largely irrelevant role to
> creating the image.

Observable behaviours of neurons include things such as ion gates
opening, neurotransmitter release at the synapse and action potential
propagation down the axon. I know there may also be non-observables,
but I'm only asking about the observables. Do you agree that if a
non-observable causes a change in an observable, that would be like
magic from the point of view of a scientist?

>> > We know that for example, gambling affects the physical behavior of
>> > the amygdala. What physical force do you posit that emanates from
>> > 'gambling' that penetrates the skull and blood brain barrier to
>> > mobilize those neurons?
>>
>> The skull has various holes in it (the foramen magnum, the orbits,
>> foramina for the cranial nerves) through which sense data from the
>> environment enters and, via a series of neural relays, reaches the
>> amygdala and other parts of the brain.
>
> What is 'sense data' made of and how does it get into 'gambling'?

Sense data could be the sight and sound of a poker machine, which gets
into the brain, is processed in a complex way, and is understood to be
"gambling".

> Not at all. The amygdala's response to gambling cannot be observed on
> an MRI. We can only infer such a cause because we a priori understand
> the experience of gambling. If we did not, of course we could not
> infer any kind of association with neural patterns of firing with
> something like 'winning a big pot in video poker'. That brain activity
> is not a chain reaction from some other part of the brain. The brain
> is actually responding to the sense that the mind is making of the
> outside world and how it relates to the self. It is not going to be
> predictable from whatever the amygala happens to be doing five seconds
> or five hours before the win.

The amygdala's response is visible on a fMRI, which is how we know
about it. We can infer this without knowing anything about either
gambling or the brain, noticing that input A (the poker machine) is
consistently followed by output B (the amygdala lighting up on fMRI).

>> You have not answered it. You have contradicted yourself by saying we
>> *don't* observe the brain doing things contrary to physics and we *do*
>> observe the brain doing things contrary to physics.
>
> We don't observe the Mona Lisa doing things contrary to the properties
> of paint, but we do observe the Mona Lisa as a higher order experience
> manifested through paint. It's the same thing. Physics doesn't explain
> the psyche, but psyche uses the physical brain in the ordinary
> physical ways that the brain can be used.

But the Mona Lisa does not move of its own accord. That is what it
would have to do for the situation to be analogous to brain changes
occurring due to mental processes and not physical processes.

>>You seem to
>> believe that neurons in the amygdala will fire spontaneously when the
>> subject thinks about gambling, which would be magic.
>
> You don't understand that you are arguing against neuroscience and
> common sense. Of course you can manually control your electrochemical
> circuits with thought. That's what all thinking is. It's not that the
> amygdala fires spontaneously, it's that the thrills and chills of
> risktaking *are* the firing of the amygdala. You seem to be saying
> that the brain has our entire life planned out for us in advance as
> some kind of meaningless encephalographic housekeeping exercise where
> we have no ability to make ourselves horny by thinking about sex or
> hungry by thinking about food, no capacity to do or say things based
> upon the realities outside of our skull rather than the inside.

I'm not sure if you're not understanding or just pretending not to
understand. Take any neuron in the brain: it fires due to the
influences of the surrounding neurons, and each of those neurons fires
due to the influence of the neurons surrounding it, and so on,
accounting for all the neurons in the brain. These are the third
person observable effects; associated with (or identical to, or
another aspect of, or supervening on, or a side-effect of - it doesn't
change the argument) this observable activity are the thoughts and
feelings. A scientist cannot see the thoughts and feelings, since they
are non-observable. The non-observable thoughts and feelings cannot
affect the observable physical activity, for if they could, the
scientist would see apparently magical events. We can still say that
thought A leads to feeling B, but what the scientist observes is that
brain state A' (associated with thought A) leads to brain state B'
(associated with feeling B). So although we can tell the story of the
person in terms of thoughts and feelings, the scientist can tell the
same story in terms of biochemical events. If the scientist
understands the biochemistry then in theory he will be able to predict
everything the person will do (or write probabilistic equations if
truly random effects are significant in the brain), although in
practice due to the complexity of the system this would be very
difficult.
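The chain of influence described here can be sketched as a toy integrate-and-fire network (purely illustrative; the weights, threshold, and leak constant are arbitrary assumptions, not biological measurements):

```python
# Toy leaky integrate-and-fire network: each neuron accumulates weighted
# input from the neurons connected to it and fires when its membrane
# potential crosses a threshold. All numbers are illustrative choices.

THRESHOLD = 1.0
LEAK = 0.9  # fraction of potential retained each time step

def step(potentials, weights, fired):
    """Advance the network one time step.

    potentials: membrane potential of each neuron.
    weights[i][j]: influence of neuron j on neuron i.
    fired: which neurons fired on the previous step.
    """
    new_potentials = []
    new_fired = []
    for i, v in enumerate(potentials):
        v = v * LEAK + sum(w for j, w in enumerate(weights[i]) if fired[j])
        if v >= THRESHOLD:
            new_fired.append(True)
            v = 0.0  # reset after firing
        else:
            new_fired.append(False)
        new_potentials.append(v)
    return new_potentials, new_fired

# Three neurons in a chain: 0 -> 1 -> 2. An external stimulus (the
# "sense organ" input) fires neuron 0, and the activity propagates
# down the chain one step at a time, fully predictably.
weights = [[0, 0, 0], [1.2, 0, 0], [0, 1.2, 0]]
potentials = [0.0, 0.0, 0.0]
fired = [True, False, False]
for _ in range(2):
    potentials, fired = step(potentials, weights, fired)
```

Given the wiring and the initial stimulus, every subsequent firing here is determined, which is the sense in which the third-person story is predictable in principle.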

>>Neurons only fire
>> in response to a physical stimulus.
>
> Absurd. Is there a physical difference between a letter written in
> Chinese and one written in English...some sort of magic neurochemical
> that wafts off of the Chinese ink that prevents my cortex from parsing
> the characters?

Of course there is! The Chinese characters reflect light in a
different pattern, which stimulates the retina differently, which
sends different signals to the visual cortex, which sends different
signals to the language centres. If knowledge of Chinese has been
stored in the language centre the subject understands it, otherwise he
does not.

>> That the physical stimulus has
>> associated qualia is not observable:
>> a scientist would see the neuron
>> firing, explain why it fired in physical terms, and then wonder as an
>> afterthought if the neuron "felt" anything while it was firing.
>
> Which is why that approach is doomed to failure. There is no point to
> the brain other than to help process qualia. Very little of the brain
> is required for a body to survive. Insects have brains, and they
> survive quite well.

That the scientist can't see the qualia is not his fault. As a
practical matter, knowledge of the mechanics of the brain can help in
restoring normal function when things go wrong, even without
understanding the qualia.

>> >> A neuron has a limited number of duties: to fire if it sees a certain
>> >> potential difference across its cell membrane or a certain
>> >> concentration of neurotransmitter.
>>
>> > That is a gross reductionist mispresentation of neurology. You are
>> > giving the brain less functionality than mold. Tell me, how does this
>> > conversation turn into cell membrane potentials or neurotransmitters?
>>
>> Clearly, it does, since this conversation occurs when the neurons in
>> our brains are active.
>
> My God. You are unbelievable. I give you a straightforward, unarguably
> obvious example of a phenomenon which obviously has absolutely nothing
> to do with cellular biology but is nonetheless controlling the
> behavior of neurological cells, and you answer that it must be
> biological anyway. Your position, literally, is that 'I can't be
> wrong, because I already know that I am right.'

Particular brain activity is necessary and sufficient for this
conversation to occur. It is necessary because without this brain
activity, no conversation. It is sufficient because if this brain
activity occurs, the conversation occurs. These are mainstream
scientific beliefs which are not disputed, like the fact that the
heart pumps blood.

>>The important functionality of the neurons is
>> the action potential, since that triggers other neurons and ultimately
>> muscle. The complex cellular apparatus in the neuron is there to allow
>> this process to happen, as the complex cellular apparatus in the
>> thyroid is to enable secretion of thyroxine. An artificial thyroid
>> that measured TSH levels and secreted thyroxine accordingly could
>> replace the thyroid gland even though it was nothing like the original
>> organ in structure.
>
> But you have no idea what triggers the action potentials in the first
> place other than other action potentials. This makes us completely
> incapable of any kind of awareness of the outside world. You are
> mistaking the steering wheel for the driver.

The outside world gets in via the sense organs, which trigger action
potentials in nerves, which then trigger a series of action potentials
in the brain.

>> > So if I move my arm, that's because the neurons that have nothing to
>> > do with my arm must have caused the ones that do relate to my arm to
>> > fire? And 'I' think that I move 'my arm' because why exactly?
>>
>> The neurons are connected in a network. If I see something relating to
>> the economy that may lead me to move my arm to make an online bank
>> account transaction.
>
> What is 'I' and how does it physically create action potentials? The
> whole time you are telling me that only neurons can trigger other
> neurons, and now you want to invoke 'I'? Does I follow the laws of
> physics or is it magic? Which is it? Does 'I' do anything that cannot
> be explained by action potentials and cerebrospinal fluid? I expect
> I'm going to hear some metaphysical invocations of 'information' in
> the network.

"I" am the ensemble of neurons in the brain which when they are
functioning properly give rise to consciousness and a sense of
identity. "I" never do anything that can't be explained in terms of a
chain of neuronal events.

>> Obviously there has to be some causal connection
>> between my arm and the information about the economy. How do you
>> imagine that it happens?
>
> It happens because you make sense of what you read about the
> economy and that sense motivates you to instantiate your own arm
> muscles to move your arm. The experience of making sense of the economic
> news, as you said, *may* lead 'you' to move your arm - not *will
> cause* your arm to move, or your neurons to secrete acetylcholine by
> itself. It's a voluntary, high level, top-down participation through
> which you control your body and your life.

The making sense of what you read occurs due to certain neuronal
activity in the language centre of your brain. This may or may not
cause you to take a certain action, just as a coin may come up heads
or tails.

>> > If the brain of even a flea were anywhere remotely close to the
>> > simplistic goofiness that you describe, we should have figured out
>> > human consciousness completely 200 years ago.
>>
>> Even the brain of a flea is very complex. The brain of the nematode C
>> elegans is the simplest brain we know, and although we have the
>> anatomy of its neurons and their connections, no adequate computer
>> simulation exists because we do not know the strength of the
>> connections.
>
> Why is the strength of the connections so hard to figure out?

Because scientific research is difficult.

>> There is a certain level of tolerance in every physical object we
>> might want to simulate. We need to know a lot about it, but we don't
>> need accuracy down to the position of every atom, for if the brain
>> were so delicately balanced it would malfunction with the slightest
>> perturbation.
>
> A few micrograms of LSD or ricin can change a person's entire life or
> end it.

Yes, there are crucial parts of the system which don't tolerate
disruption. It's the same with any machine.

>> Whether something is conscious or not has nothing to do with whether
>> it is deterministic or predictable.
>
> What makes you think that's true? Do you have a counterfactual?

There is no reason to believe that determinism affects consciousness.
In general it is impossible to distinguish random from pseudorandom.
If the brain utilised true random processes and part of it were
replaced with a component that used a pseudorandom number generator
with a similar probability function to the true random one we would
notice no change in behaviour and the subject would notice no change
in consciousness (for if he did there would be a change in behaviour).
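The indistinguishability claim can be checked numerically (a sketch; the "similar probability function" here is simply the uniform distribution on [0, 1), and the sample size is my own illustrative choice):

```python
import random

# A "true random" source (random.SystemRandom draws from the OS entropy
# pool) versus a deterministic pseudorandom generator (Mersenne Twister)
# producing the same uniform distribution.
true_random = random.SystemRandom()
pseudo_random = random.Random(12345)  # fixed seed: fully deterministic

N = 100_000
true_samples = [true_random.random() for _ in range(N)]
pseudo_samples = [pseudo_random.random() for _ in range(N)]

# The sample statistics of the two streams agree to within sampling
# error; no behavioural test on the outputs alone tells the sources apart.
true_mean = sum(true_samples) / N
pseudo_mean = sum(pseudo_samples) / N
```

Any downstream system driven by one stream rather than the other sees the same statistics, which is the point of the substitution argument above.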

>> This statement shows that you haven't understood what a partial zombie
>> is. It is a conscious being which lacks consciousness in a particular
>> modality, such as visual perception or language processing, but does
>> not notice that anything is abnormal and presents no external evidence
>> that anything is abnormal. You have said a few posts back that you
>> think this is absurd: when you're conscious, you know you're
>> conscious.
>
> I can only use examples where the partial zombie is on the outside
> rather than the inside, since there is no way to have an example like
> that (you either can't tell if someone else is a zombie or you can't
> tell anything if you yourself are a partial zombie). I understand
> exactly what you are saying, I'm just illustrating that if you turn it
> around so that we can see the zombie side out but assume a non-zombie
> side inside, it's the same thing, and that it's no big deal.

A partial zombie occurs if only part of your brain is zombified.
Because this part of the brain (by definition) has the same observable
third person behaviour as it did before it was zombified, you would
lack the qualia of the replaced part while not noticing or behaving
differently. It is this which is absurd. The only way out of the
absurdity is to say that it is impossible to make a brain component
with the same observable third person behaviour that didn't also have
the same qualia. (Sorry for the clumsiness of "observable third person
behaviour" - I should just say "behaviour" but I think in the past you
have taken this to include consciousness).

>> The question is, why did humans evolve with consciousness rather than
>> as philosophical zombies? The answer is, because it isn't possible to
>> make a philosophical zombie since anything that behaves like a human
>> must be conscious as a side-effect.
>
> I understand that you are able to take that argument seriously, but it
> just jaw dropping to me that anyone could. Why does fire exist?
> Because it isn't possible to burn anything without starting a fire
> because anything that behaves like it's on fire must be burning as a
> side effect. It's just the most nakedly fallacious non-explanation I
> can imagine. It has zero explanatory power, and besides that, it's
> completely untrue. An actor's presence in a movie behaves like a human
> but the image on the screen is not 'conscious as a side-effect'. They
> are not even a little bit more conscious than a picture of a circle.
> Just, ugh.

Consciousness is a rather elaborate thing to evolve and elaborate
things like that don't evolve unless they strongly enhance survival
and reproductive success. If philosophical zombies were possible, they
would have the same survival and reproductive success as non-zombies.
But philosophical zombies did not evolve, suggesting to me that
consciousness is a necessary side-effect of any intelligent being.

>> It's not impossible, there is a qualitative difference between
>> difficult and impossible. It would be difficult for humans to build a
>> planet the size of Jupiter, but there is no theoretical reason why it
>> could not be done. On the other hand, it is impossible to build a
>> square triangle, since it presents a logical contradiction. There is
>> no logical contradiction in substituting the function of parts of the
>> human body. Substituting one thing for another to maintain function is
>> one of the main tasks to which human intelligence is applied.
>
> I understand what you are saying, and I would agree with you if the
> contents of the psyche were not so utterly different from the physical
> characteristics of the brain. We have no precedent for engineering
> such a thing. It dwarfs the idea of building Jupiter. If you say we
> can substitute lead for gold, I would say, well, sure, if you blast it
> down to protons and reassemble it atom by atom - or find an easier way
> to do it with a particle accelerator. But we have no common
> denominator of human consciousness to work from. A few micrograms off
> here or chromosomes off there, and you get major changes. I'm much
> more optimistic about replicating tissue, and augmenting the nervous
> system, but actually replacing it and expecting 'you' to still be in
> there is a completely different proposition.

We have already started engineering brain replacement: cochlear
implants, artificial hippocampus. These are crude but it's early days
yet.

>> You're saying that free will in a deterministic world is
>> contradictory. That may be the case if you define free will in a
>> particular way (and not everyone defines it that way), but still that
>> does not imply that the *feeling* of free will is incompatible with
>> determinism.
>
> I think that it is, because determinism assumes that everything that
> happens happens for a particular reason. What would be the reason for
> such a feeling to exist, and how would it come into existence? Why
> would determinism care if something pretends that it is not
> determined, and how could it even ontologically conceive of the
> non-determined?

The feeling of free will is simply due to the fact that I don't know
what I'm going to do until I do it. This is the case for computer
programs as well: the program can't know what the outcome of the
computation is until it actually runs, otherwise running it would be a
waste of time.


--
Stathis Papaioannou

Craig Weinberg

Sep 27, 2011, 4:35:58 PM9/27/11
to Everything List
On Sep 27, 9:20 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Tue, Sep 27, 2011 at 7:01 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> OK, so you agree that the *observable* behaviour of neurons can be
> >> adequately explained in terms of a chain of physical events. The
> >> neurons won't do anything that is apparently magical, right?
>
> > Are not all of our observations observable behaviors of neurons?
> > You're not understanding how I think observation works. There is no
> > such thing as an observable behavior, it's always a matter of
> > observable how, and by who? If you limit your observation of how
> > neurons behave to what can be detected by a series of metal probes or
> > microscopic antenna, then you are getting a radically limited view of
> > what neurons are and what they do. You are asking a blind man what the
> > Mona Lisa looks like by having him touch the paint, then making a
> > careful impression of his fingers, and then announcing that the Mona
> > Lisa can only do what fingerpainting can do, and that inferring
> > anything beyond the nature of plain old paint to the Mona Lisa is
> > magical. No. It doesn't work that way. A universe where nothing more
> > than paint exists has no capacity to describe an intentional, higher
> > level representation through a medium of paint. The dynamics of paint
> > alone do not describe their important but largely irrelevant role to
> > creating the image.
>
> Observable behaviours of neurons include things such as ion gates
> opening, neurotransmitter release at the synapse and action potential
> propagation down the axon.

Those phenomena are observable using certain kinds of instruments. Our
native instruments are infinitely more authoritative in observing the
behaviors of neurons.

> I know there may also be non-observables,
> but I'm only asking about the observables.

You are asking about 3-p machine observables.

> Do you agree that if a
> non-observable causes a change in an observable, that would be like
> magic from the point of view of a scientist?

Not at all. We observe 3-p changes caused by 1-p intentionality
routinely. There is a study, cited recently in that TV documentary,
in which the regions of vegetative patients' brains associated with
coordinated movements light up on an fMRI when the patients are asked
to imagine playing tennis. http://web.me.com/adrian.owen/site/Publications_files/Owen-2006-FutureNeurology.pdf
p. 693-4

Why do you want me to think that the ordinary relationship between the
brain and the mind is magic? The 'non-observable cause' is the patient
voluntarily imagining playing tennis. There is no other cause. They
were given a choice between tennis and house, and the result of the
fMRI was determined by nothing other than the patient's subjective
choice. So will you stop accusing me of witchcraft about this now or
is there going to be some other way of making me seem like I am the
one rejecting science when it is your position which broadly
reimagines the brain as some kind of closed-circuit Rube Goldberg
apparatus?

>
> >> > We know that for example, gambling affects the physical behavior of
> >> > the amygdala. What physical force do you posit that emanates from
> >> > 'gambling' that penetrates the skull and blood brain barrier to
> >> > mobilize those neurons?
>
> >> The skull has various holes in it (the foramen magnum, the orbits,
> >> foramina for the cranial nerves) through which sense data from the
> >> environment enters and, via a series of neural relays, reaches the
> >> amygdala and other parts of the brain.
>
> > What is 'sense data' made of and how does it get into 'gambling'?
>
> Sense data could be the sight and sound of a poker machine, which gets
> into the brain, is processed in a complex way, and is understood to be
> "gambling".

By sight and sound do you mean acoustic waves and photons? Those
things don't physically 'get into the brain', do they? You won't find
'sights and sounds' in the bloodstream. If you include them in a model
of neurology, wouldn't you have to include the entire universe?

>
> > Not at all. The amygdala's response to gambling cannot be observed on
> > an MRI. We can only infer such a cause because we a priori understand
> > the experience of gambling. If we did not, of course we could not
> > infer any kind of association with neural patterns of firing with
> > something like 'winning a big pot in video poker'. That brain activity
> > is not a chain reaction from some other part of the brain. The brain
> > is actually responding to the sense that the mind is making of the
> > outside world and how it relates to the self. It is not going to be
> > predictable from whatever the amygala happens to be doing five seconds
> > or five hours before the win.
>
> The amygdala's response is visible on a fMRI, which is how we know
> about it. We can infer this without knowing anything about either
> gambling or the brain, noticing that input A (the poker machine) is
> consistently followed by output B (the amygdala lighting up on fMRI).

Input A does not have to be a poker machine. It can be a daydream of a
horse race and give the same fMRI output B. It is only through our
first-hand experience of the feelings that these different activities
have in common - risk taking, fear of losing or being caught, etc,
that we have even the foggiest idea of what the amygdala might do.

>
> >> You have not answered it. You have contradicted yourself by saying we
> >> *don't* observe the brain doing things contrary to physics and we *do*
> >> observe the brain doing things contrary to physics.
>
> > We don't observe the Mona Lisa doing things contrary to the properties
> > of paint, but we do observe the Mona Lisa as a higher order experience
> > manifested through paint. It's the same thing. Physics doesn't explain
> > the psyche, but psyche uses the physical brain in the ordinary
> > physical ways that the brain can be used.
>
> But the Mona Lisa does not move of its own accord. That is what it
> would have to do for the situation to be analogous to brain changes
> occurring due to mental processes and not physical processes.

Ok, so use a TV show as an example. That moves. The imaging elements
are being moved remotely by a higher order process. The low level
processes are being instructed by high level processes. Any attempt to
limit our understanding of TV shows to the dynamics of an LCD display
or digital modem will be only mildly informative at best, but will
lead us down a completely misguided path if we decide a priori that TV
shows must be nothing but digital bitstream displays.

>
> >>You seem to
> >> believe that neurons in the amygdala will fire spontaneously when the
> >> subject thinks about gambling, which would be magic.
>
> > You don't understand that you are arguing against neuroscience and
> > common sense. Of course you can manually control your electrochemical
> > circuits with thought. That's what all thinking is. It's not that the
> > amygdala fires spontaneously, it's that the thrills and chills of
> > risk-taking *are* the firing of the amygdala. You seem to be saying
> > that the brain has our entire life planned out for us in advance as
> > some kind of meaningless encephalographic housekeeping exercise where
> > we have no ability to make ourselves horny by thinking about sex or
> > hungry by thinking about food, no capacity to do or say things based
> > upon the realities outside of our skull rather than the inside.
>
> I'm not sure if you're not understanding or just pretending not to
> understand. Take any neuron in the brain: it fires due to the
> influences of the surrounding neurons,

Noooo. Millions of neurons fire simultaneously in separate regions of
the brain. Your assumption that chain reactions are the only way
neurons fire is not correct. You owe the brain an apology.
http://www.youtube.com/watch?v=VaQ66lDZ-08

Please note: "Coherent SPONTANEOUS activity"
http://jn.physiology.org/content/96/6/3517.full?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&author1=vincent&searchid=1&FIRSTINDEX=0&sortspec=relevance&resourcetype=HWCIT

>and each of those neurons fires
> due to the influence of the neurons surrounding it, and so on,
> accounting for all the neurons in the brain.

This is a fairy tale which I have not even heard anyone else claim
before.

>These are the third
> person observable effects; associated with (or identical to, or
> another aspect of, or supervening on, or a side-effect of - it doesn't
> change the argument) this observable activity are the thoughts and
> feelings. A scientist cannot see the thoughts and feelings, since they
> are non-observable.

They are observable directly to the subject. A scientist can research
the behavior of her own brain if she wants to.

> The non-observable thoughts and feelings cannot
> affect the observable physical activity,

If they did not affect the observable physical world then I could not
type my thoughts to you right now. Your position is utterly invalid if
genuine, and not even entertaining if trollery.

> for if they could, the
> scientist would see apparently magical events.

Like voluntary movement of body parts and speech?

>We can still say that
> thought A leads to feeling B, but what the scientist observes is that
> brain state A' (associated with thought A) leads to brain state B'
> (associated with feeling B). So although we can tell the story of the
> person in terms of thoughts and feelings, the scientist can tell the
> same story in terms of biochemical events. If the scientist
> understands the biochemistry then in theory he will be able to predict
> everything the person will do (or write probabilistic equations if
> truly random effects are significant in the brain), although in
> practice due to the complexity of the system this would be very
> difficult.

Some brain states do work that way, but some don't. Anyone that moves
their little finger is going to move it in a neurologically similar
way, but what people want to do for a career is not determinable in
the same way. It depends on where they are born, how they are raised,
what their opportunities are, etc. It's not something which can be
regressed from brain state Q to some kind of precursor brain state G.

>
> >>Neurons only fire
> >> in response to a physical stimulus.
>
> > Absurd. Is there a physical difference between a letter written in
> > Chinese and one written in English...some sort of magic neurochemical
> > that wafts off of the Chinese ink that prevents my cortex from parsing
> > the characters?
>
> Of course there is! The Chinese characters reflect light in a
> different pattern, which stimulates the retina differently, which
> sends different signals to the visual cortex, which sends different
> signals to the language centres. If knowledge of Chinese has been
> stored in the language centre the subject understands it, otherwise he
> does not.

A pattern is not a physical stimulus. By your reckoning, it is magic
because it is not subject to the laws of physics. If you are going to
allow patterns as physical, then just say that thoughts control neuron
behaviors by generating patterns. My conscious mind, as the interior
experience of several different regions of the brain, decides, as a
whole, to write a note. From the 3-p it looks like signals
spontaneously emerge in the language centers, cognitive and emotional
centers, etc, where they spread and are responded to by other parts of
the brain, lots of back and forth to the efferent nerves in the spine
as I am composing, writing, editing, and completing the note. At all
times I am deciding the content of the note, but the I is just the
most dominant voice in a fugue of semi and subconscious influences -
some of them may obstruct my train of thought, or make a typo, etc,
but at no time is there something visible on an MRI which makes sense
in terms of the behavior of neurons alone which could lead an alien
biologist to conclude the existence of 'language' or 'note
writing' (unless he himself had first hand experience with those
phenomena).

>
> >> That the physical stimulus has
> >> associated qualia is not observable:
> >> a scientist would see the neuron
> >> firing, explain why it fired in physical terms, and then wonder as an
> >> afterthought if the neuron "felt" anything while it was firing.
>
> > Which is why that approach is doomed to failure. There is no point to
> > the brain other than to help process qualia. Very little of the brain
> > is required for a body to survive. Insects have brains, and they
> > survive quite well.
>
> That the scientist can't see the qualia is not his fault. As a
> practical matter, knowledge of the mechanics of the brain can help in
> restoring normal function when things go wrong, even without
> understanding the qualia.

The scientist can see qualia, just not with an instrument. We can all
see the qualia. It is his fault if he denies the relevance of the
qualia.

>
> >> >> A neuron has a limited number of duties: to fire if it sees a certain
> >> >> potential difference across its cell membrane or a certain
> >> >> concentration of neurotransmitter.
>
> >> > That is a gross reductionist mispresentation of neurology. You are
> >> > giving the brain less functionality than mold. Tell me, how does this
> >> > conversation turn into cell membrane potentials or neurotransmitters?
>
> >> Clearly, it does, since this conversation occurs when the neurons in
> >> our brains are active.
>
> > My God. You are unbelievable. I give you a straightforward, unarguably
> > obvious example of a phenomenon which obviously has absolutely nothing
> > to do with cellular biology but is nonetheless controlling the
> > behavior of neurological cells, and you answer that it must be
> > biological anyway. Your position, literally, is that 'I can't be
> > wrong, because I already know that I am right.'
>
> Particular brain activity is necessary and sufficient for this
> conversation to occur. It is necessary because without this brain
> activity, no conversation. It is sufficient because if this brain
> activity occurs, the conversation occurs. These are mainstream
> scientific beliefs which are not disputed, like the fact that the
> heart pumps blood.

It's not sufficient, because our brains are not physically connected.
You cannot make this conversation happen just by having someone
stimulate your brain. You could maybe simulate your end of it, but
that wouldn't be an occurrence of this conversation.

>
> >>The important functionality of the neurons is
> >> the action potential, since that triggers other neurons and ultimately
> >> muscle. The complex cellular apparatus in the neuron is there to allow
> >> this process to happen, as the complex cellular apparatus in the
> >> thyroid is to enable secretion of thyroxine. An artificial thyroid
> >> that measured TSH levels and secreted thyroxine accordingly could
> >> replace the thyroid gland even though it was nothing like the original
> >> organ in structure.
>
> > But you have no idea what triggers the action potentials in the first
> > place other than other action potentials. This makes us completely
> > incapable of any kind of awareness of the outside world. You are
> > mistaking the steering wheel for the driver.
>
> The outside world gets in via the sense organs, which trigger action
> potentials in nerves, which then trigger a series of action potentials
> in the brain.

You're glossing over the part where the outside world, composed of
meaningless physical enactments, becomes our inside world...which is
made of some completely unexplained presentation.

Besides, it goes the other way too. I trigger a series of action
potentials in my brain, which trigger action potentials in my efferent
nerves, which triggers the muscles in my fingers to trigger a keyboard
to type my thoughts into your mind.

>
> >> > So if I move my arm, that's because the neurons that have nothing to
> >> > do with my arm must have caused the ones that do relate to my arm to
> >> > fire? And 'I' think that I move 'my arm' because why exactly?
>
> >> The neurons are connected in a network. If I see something relating to
> >> the economy that may lead me to move my arm to make an online bank
> >> account transaction.
>
> > What is 'I' and how does it physically create action potentials? The
> > whole time you are telling me that only neurons can trigger other
> > neurons, and now you want to invoke 'I'? Does I follow the laws of
> > physics or is it magic? Which is it? Does 'I' do anything that cannot
> > be explained by action potentials and cerebrospinal fluid? I expect
> > I'm going to hear some metaphysical invocations of 'information' in
> > the network.
>
> "I" am the ensemble of neurons in the brain which when they are
> functioning properly give rise to consciousness and a sense of
> identity. "I" never do anything that can't be explained in terms of a
> chain of neuronal events.

What makes you think that 'giving rise to consciousness and a sense of
identity' can be explained in terms of a chain of neuronal events?
It's just because you assume a priori that that is what consciousness is.

>
> >> Obviously there has to be some causal connection
> >> between my arm and the information about the economy. How do you
> >> imagine that it happens?
>
> > It happens because you make sense of what you read about the
> > economy and that sense motivates you to instantiate your own arm
> > muscles to move your arm. The experience of making sense of the economic
> > news, as you said, *may* lead 'you' to move your arm - not *will
> > cause* your arm to move, or your neurons to secrete acetylcholine by
> > itself. It's a voluntary, high level, top-down participation through
> > which you control your body and your life.
>
> The making sense of what you read occurs due to certain neuronal
> activity in the language centre of your brain. This may or may not
> cause you to take a certain action, just as a coin may come up heads
> or tails.

Why is the making sense necessary at all? Why wouldn't the neuronal
activity of reading just cause the neuronal activity of taking a
certain action?

>
> >> > If the brain of even a flea were anywhere remotely close to the
> >> > simplistic goofiness that you describe, we should have figured out
> >> > human consciousness completely 200 years ago.
>
> >> Even the brain of a flea is very complex. The brain of the nematode C
> >> elegans is the simplest brain we know, and although we have the
> >> anatomy of its neurons and their connections, no adequate computer
> >> simulation exists because we do not know the strength of the
> >> connections.
>
> > Why is the strength of the connections so hard to figure out?
>
> Because scientific research is difficult.

It's ok to say 'I don't know'.

>
> >> There is a certain level of tolerance in every physical object we
> >> might want to simulate. We need to know a lot about it, but we don't
> >> need accuracy down to the position of every atom, for if the brain
> >> were so delicately balanced it would malfunction with the slightest
> >> perturbation.
>
> > A few micrograms of LSD or ricin can change a person's entire life or
> > end it.
>
> Yes, there are crucial parts of the system which don't tolerate
> disruption. It's the same with any machine.

Are you assuming then that consciousness is not such a disruption
intolerant part of the system?

>
> >> Whether something is conscious or not has nothing to do with whether
> >> it is deterministic or predictable.
>
> > What makes you think that's true? Do you have a counterfactual?
>
> There is no reason to believe that determinism affects consciousness.
> In general it is impossible to distinguish random from pseudorandom.
> If the brain utilised true random processes and part of it were
> replaced with a component that used a pseudorandom number generator
> with a similar probability function to the true random one we would
> notice no change in behaviour and the subject would notice no change
> in consciousness (for if he did there would be a change in behaviour).
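[The indistinguishability claim above can be sketched in a few lines. This is an editor's illustrative aside, not from the thread; the sample size and tolerance are arbitrary.]

```python
import random

def mean_bits(source, n=100_000):
    """Empirical mean of n coin flips drawn from the given source."""
    return sum(source.randint(0, 1) for _ in range(n)) / n

# A "true" (OS-entropy) source and a deterministic pseudorandom one
# with the same probability function.
true_random = random.SystemRandom()
pseudo_random = random.Random(12345)  # seed chosen arbitrarily

m_true = mean_bits(true_random)
m_pseudo = mean_bits(pseudo_random)

# An observer comparing only the statistics of the outputs cannot
# tell which source is which.
print(abs(m_true - m_pseudo) < 0.01)
```

[Indistinguishable statistics don't by themselves settle the philosophical point, of course; the sketch only illustrates why no behavioural test would detect the substitution.]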

So the answer is no, you do not have a counterfactual, and that there
is nothing that makes you think that it's true other than it cannot be
proven to be false by non-subjective means. Considering that the whole
question is about subjectivity, to rule out subjective views may not
be a scientific way to approach it. To me, it's pretty clear that one
of the functions of consciousness is to make determinations, and it
therefore presents another ontological option besides predetermined,
random, or pseudorandom. There is such a thing as intentionality; the
fact that it cannot be understood through physics and computation is
not a compelling argument at all to me, it just reveals the
limitations of our current models of physics.

>
> >> This statement shows that you haven't understood what a partial zombie
> >> is. It is a conscious being which lacks consciousness in a particular
> >> modality, such as visual perception or language processing, but does
> >> not notice that anything is abnormal and presents no external evidence
> >> that anything is abnormal. You have said a few posts back that you
> >> think this is absurd: when you're conscious, you know you're
> >> conscious.
>
> > I can only use examples where the partial zombie is on the outside
> > rather than the inside, since there is no way to have an example like
> > that (you either can't tell if someone else is a zombie or you can't
> > tell anything if you yourself are a partial zombie). I understand
> > exactly what you are saying, I'm just illustrating that if you turn it
> > around so that we can see the zombie side out but assume a non-zombie
> > side inside, it's the same thing, and that it's no big deal.
>
> A partial zombie occurs if only part of your brain is zombified.
> Because this part of the brain (by definition) has the same observable
> third person behaviour as it did before it was zombified, you would
> lack the qualia of the replaced part while not noticing or behaving
> differently. It is this which is absurd.

That contradicts your view that the behavior of the mind must all be
physically observable in the brain. We know that qualia don't
physically exist in the brain, so that makes it a zombie already.

>The only way out of the
> absurdity is to say that it is impossible to make a brain component
> with the same observable third person behaviour that didn't also have
> the same qualia. (Sorry for the clumsiness of "observable third person
> behaviour" - I should just say "behaviour" but I think in the past you
> have taken this to include consciousness).

No, the way out of it is to see that qualia can be absent, distorted,
or replaced in the brain. Blind people learn Braille and use the same
area of the brain that sighted people use for vision, only for tactile
qualia. Synesthesia also shows that qualia are not fixed to
functionality, and conversion disorders illustrate absent qualia
without neurological deficit.

Even if none of those things were true, to say that this unexplainable
experiential dimension we live in must just 'come with' particular
mathematical objects because we can't imagine being able to make
something that acts like us but doesn't live in the same dimension has
all the earmarks of a terrible theory.
>
> >> The question is, why did humans evolve with consciousness rather than
> >> as philosophical zombies? The answer is, because it isn't possible to
> >> make a philosophical zombie since anything that behaves like a human
> >> must be conscious as a side-effect.
>
> > I understand that you are able to take that argument seriously, but it
> > is just jaw-dropping to me that anyone could. Why does fire exist?
> > Because it isn't possible to burn anything without starting a fire
> > because anything that behaves like it's on fire must be burning as a
> > side effect. It's just the most nakedly fallacious non-explanation I
> > can imagine. It has zero explanatory power, and besides that, it's
> > completely untrue. An actor's presence in a movie behaves like a human
> > but the image on the screen is not 'conscious as a side-effect'. They
> > are not even a little bit more conscious than a picture of a circle.
> > Just, ugh.
>
> Consciousness is a rather elaborate thing to evolve and elaborate
> things like that don't evolve unless they strongly enhance survival
> and reproductive success.

Only if you presume a priori that evolution for its own sake is the
only possible phenomenology in all possible universes. You have it
backwards. You first assume that we know the machine is real, then you
conclude that if the ghost exists, it must be a necessary part of the
machine. Instead, if we see that the machine is what it looks like
when a ghost looks at another ghost, then the whole notion of
evolution can be put in proper perspective as the method the machine
uses to accomplish the purposes of the ghost. We have already
established that consciousness has no plausible evolutionary function
or mechanism of being generated. That is the fundamental reality.
Evolution arises as a particular balance between entropy and
significance - it is not an explanation for significance.

> If philosophical zombies were possible, they
> would have the same survival and reproductive success as non-zombies.
> But philosophical zombies did not evolve, suggesting to me that
> consciousness is a necessary side-effect of any intelligent being.

Philosophical zombies did evolve. They are called sociopaths.

>
> >> It's not impossible, there is a qualitative difference between
> >> difficult and impossible. It would be difficult for humans to build a
> >> planet the size of Jupiter, but there is no theoretical reason why it
> >> could not be done. On the other hand, it is impossible to build a
> >> square triangle, since it presents a logical contradiction. There is
> >> no logical contradiction in substituting the function of parts of the
> >> human body. Substituting one thing for another to maintain function is
> >> one of the main tasks to which human intelligence is applied.
>
> > I understand what you are saying, and I would agree with you if the
> > contents of the psyche were not so utterly different from the physical
> > characteristics of the brain. We have no precedent for engineering
> > such a thing. It dwarfs the idea of building Jupiter. If you say we
> > can substitute lead for gold, I would say, well, sure, if you blast it
> > down to protons and reassemble it atom by atom - or find an easier way
> > to do it with a particle accelerator. But we have no common
> > denominator of human consciousness to work from. A few micrograms off
> > here or chromosomes off there, and you get major changes. I'm much
> > more optimistic about replicating tissue, and augmenting the nervous
> > system, but actually replacing it and expecting 'you' to still be in
> > there is a completely different proposition.
>
> We have already started engineering brain replacement: cochlear
> implants, artificial hippocampus. These are crude but it's early days
> yet.

It wouldn't matter if they were perfect. Using an artificial ear to
hear with is not the same as becoming a computer program. People used
to use a horn as a hearing aid. If I made a really fancy horn, could I
replace your brain with it?

>
> >> You're saying that free will in a deterministic world is
> >> contradictory. That may be the case if you define free will in a
> >> particular way (and not everyone defines it that way), but still that
> >> does not imply that the *feeling* of free will is incompatible with
> >> determinism.
>
> > I think that it is, because determinism assumes that everything that
> > happens happens for a particular reason. What would be the reason for
> > such a feeling to exist, and how would it come into existence? Why
> > would determinism care if something pretends that it is not
> > determined, and how could the non-determined even be ontologically
> > conceived?
>
> The feeling of free will is simply due to the fact that I don't know
> what I'm going to do until I do it.

Why would there be a feeling associated with that? What purpose would
it serve to know or not know that you don't know what you are going to
do if you can't control whether or not you do it?

>This is the case for computer
> programs as well: the program can't know what the outcome of the
> computation is until it actually runs, otherwise running it would be a
> waste of time.

Computation is a waste of time. Unless there is some non-computational
observer to give a crap.

Craig

Stathis Papaioannou

Sep 28, 2011, 9:43:20 AM9/28/11
to everyth...@googlegroups.com
On Wed, Sep 28, 2011 at 6:35 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> Do you agree that if a
>> non-observable causes a change in an observable, that would be like
>> magic from the point of view of a scientist?
>
> Not at all. We observe 3-p changes caused by 1-p intentionality
> routinely. There is a study cited recently in that TV documentary
> where the regions of vegetative patients brains associated with
> coordinated movements light up an fMRI when being asked to imagine
> playing tennis. http://web.me.com/adrian.owen/site/Publications_files/Owen-2006-FutureNeurology.pdf
> p. 693-4

> Why do you want me to think that the ordinary relationship between the
> brain and the mind is magic? The 'non-observable cause' is the patient
> voluntarily imagining playing tennis. There is no other cause. They
> were given a choice between tennis and house, and the result of the
> fMRI was determined by nothing other than the patient's subjective
> choice. So will you stop accusing me of witchcraft about this now or
> is there going to be some other way of making me seem like I am the
> one rejecting science when it is your position which broadly
> reimagines the brain as some kind of closed-circuit Rube Goldberg
> apparatus?

The patient "voluntarily imagines playing tennis" if and only if
certain neural processes occur in the brain. If you believe thoughts
can arise in the absence of such neural processes or that thoughts by
themselves (i.e. not the associated neural process) can cause physical
changes in the brain such as neurons firing then you believe in
something like an immaterial soul which does our thinking for us. It's
not impossible that there is an immaterial soul but then the question
needs to be asked, why would we need a Rube Goldberg apparatus like a
brain at all when matter can be directly animated by spirit?

>> Sense data could be the sight and sound of a poker machine, which gets
>> into the brain, is processed in a complex way, and is understood to be
>> "gambling".
>
> By sight and sound do you mean acoustic waves and photons? Those
> things don't physically 'get into the brain', do they? You won't find
> 'sights and sounds' in the bloodstream. If you include them in a model
> of neurology, wouldn't you have to include the entire universe?

Light and sound are converted into electrical impulses that travel
down the optic and auditory nerves.

>> I'm not sure if you're not understanding or just pretending not to
>> understand. Take any neuron in the brain: it fires due to the
>> influences of the surrounding neurons,
>
> Noooo. Millions of neurons fire simultaneously in separate regions of
> the brain. Your assumptions about chain reactions being the only way
> that neurons fire is not correct. You owe the brain an apology.
> http://www.youtube.com/watch?v=VaQ66lDZ-08
>
> Please note: "Coherent SPONTANEOUS activity"
> http://jn.physiology.org/content/96/6/3517.full?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&author1=vincent&searchid=1&FIRSTINDEX=0&sortspec=relevance&resourcetype=HWCIT
>
>>and each of those neurons fires
>> due to the influence of the neurons surrounding it, and so on,
>> accounting for all the neurons in the brain.
>
> This is a fairy tale which I have not even heard anyone else claim
> before.

A neuron will fire or not fire due to its internal state and the
influence of its environment. Its internal state includes, for
example, the resting membrane potential, the intracellular
concentration of sodium, potassium and calcium ions, and the type and
number of receptor proteins in the membrane. The environment includes
which other neurons it interfaces with, the type and concentration of
neurotransmitters these neurons may be releasing, the temperature, pH
and ionic concentrations in the extracellular fluid, and so on. These
factors all go into determining whether the neuron will trigger or
not. The analysis applies to every neuron in the brain including the
spontaneously active ones. The same thing applies if there is one
neuron or a hundred billion neurons, although the large number of
neurons will result in much more complex behaviour.
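[The deterministic picture described here — a neuron firing or not as a function of its internal state and inputs — can be caricatured with a toy leaky integrate-and-fire model. This is an editor's illustrative sketch; all parameters are invented and far simpler than real membrane dynamics.]

```python
def step(v, i_in, v_rest=-70.0, v_thresh=-55.0, tau=10.0, r=1.0, dt=1.0):
    """One time step: leak toward resting potential plus input drive.

    Returns the new membrane potential and whether the neuron fired.
    """
    v += (-(v - v_rest) + r * i_in) / tau * dt
    if v >= v_thresh:
        return v_rest, True  # fire and reset
    return v, False

# Constant drive: the neuron fires at a regular, fully determined rate.
v, spikes = -70.0, 0
for _ in range(100):
    v, fired = step(v, i_in=20.0)
    spikes += fired

print(spikes)  # 7 spikes with these made-up parameters
```

[The point, for the argument, is only that the spike train is a deterministic function of state and input; nothing "extra" intervenes at any step.]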

>>These are the third
>> person observable effects; associated with (or identical to, or
>> another aspect of, or supervening on, or a side-effect of - it doesn't
>> change the argument) this observable activity are the thoughts and
>> feelings. A scientist cannot see the thoughts and feelings, since they
>> are non-observable.
>
> They are observable directly to the subject. A scientist can research
> the behavior of her own brain if she wants to.
>
>> The non-observable thoughts and feelings cannot
>> affect the observable physical activity,
>
> If they did not affect the observable physical world then I could not
> type to you my thoughts right now. Your position is utterly invalid if
> genuine, and not even entertaining if trollery.
>
>> for if they could, the
>> scientist would see apparently magical events.
>
> Like voluntary movement of body parts and speech?

You're just not getting it. It's not that movement in the body is not
due to thought, but we can't see the thought, we can only see the
underlying physical events. So to a scientist, every movement in the
body can be attributed to a chain of physical events. If a thought can
cause a movement in the absence of a physical event, for example if
ligand-dependent ion channels open and trigger an action potential in
the absence of the ligand, that would be observed as magical, like a
table levitating. You seem to think that not only neurons but every
cell has the capacity to do this sort of thing; so why has no
scientist ever reported it?

>>We can still say that
>> thought A leads to feeling B, but what the scientist observes is that
>> brain state A' (associated with thought A) leads to brain state B'
>> (associated with feeling B). So although we can tell the story of the
>> person in terms of thoughts and feelings, the scientist can tell the
>> same story in terms of biochemical events. If the scientist
>> understands the biochemistry then in theory he will be able to predict
>> everything the person will do (or write probabilistic equations if
>> truly random effects are significant in the brain), although in
>> practice due to the complexity of the system this would be very
>> difficult.
>
> Some brain states do work that way, but some don't. Anyone that moves
> their little finger is going to move it in a neurologically similar
> way, but what people want to do for a career is not determinable in
> the same way. It depends on where they are born, how they are raised,
> what their opportunities are, etc. It's not something which can be
> regressed from brain state Q to some kind of precursor brain state G.

Where they were born, how they are raised, and what the weather is like
all have a physical effect on the brain. If some factor has no impact
on the brain then that cannot possibly make a difference to the
person. This is not to say that the person's trajectory through life
can be predicted, but the weather cannot be predicted with certainty
either.

>> "I" am the ensemble of neurons in the brain which when they are
>> functioning properly give rise to consciousness and a sense of
>> identity. "I" never do anything that can't be explained in terms of a
>> chain of neuronal events.
>
> What makes you think that 'giving rise to consciousness and a sense of
> identity' can be explained in terms of a chain of neuronal events?
> It's only because you assume a priori that that is what consciousness is.

Without trying to "explain" consciousness I know the circumstances
under which consciousness can be produced.

>> The making sense of what you read occurs due to certain neuronal
>> activity in the language centre of your brain. This may or may not
>> cause you to take a certain action, just as a coin may come up heads
>> or tails.
>
> Why is the making sense necessary at all? Why wouldn't the neuronal
> activity of reading just cause the neuronal activity of taking a
> certain action?

It does - and in so doing, understanding occurs. There are varying
degrees of understanding, ranging from blindly following a protocol to
analysing what you read in depth. If you analyse what you read in
depth the neural processing is more complicated and the resulting
decision more difficult to predict. In each case, the understanding
supervenes on the neural activity. Disembodied understanding does not
come forth from an immaterial soul to move your hand.

>> > A few micrograms of LSD or ricin can change a person's entire life or
>> > end it.
>>
>> Yes, there are crucial parts of the system which don't tolerate
>> disruption. It's the same with any machine.
>
> Are you assuming then that consciousness is not such a disruption
> intolerant part of the system?

Consciousness is affected by small amounts of specific chemicals but
not affected by quite gross physical changes such as the loss of
millions of neurons in the course of a day.

>> >> Whether something is conscious or not has nothing to do with whether
>> >> it is deterministic or predictable.
>>
>> > What makes you think that's true? Do you have a counterfactual?
>>
>> There is no reason to believe that determinism affects consciousness.
>> In general it is impossible to distinguish random from pseudorandom.
>> If the brain utilised true random processes and part of it were
>> replaced with a component that used a pseudorandom number generator
>> with a similar probability function to the true random one we would
>> notice no change in behaviour and the subject would notice no change
>> in consciousness (for if he did there would be a change in behaviour).
>
> So the answer is no, you do not have a counterfactual, and that there
> is nothing that makes you think that it's true other than it cannot be
> proven to be false by non-subjective means. Considering that the whole
> question is about subjectivity, to rule out subjective views may not
> be a scientific way to approach it. To me, it's pretty clear that one
> of the functions of consciousness is to make determinations, and it
> therefore presents another ontological option besides predetermined,
> random, or pseudorandom. There is such a thing as intentionality; the
> fact that it cannot be understood through physics and computation is
> not a compelling argument at all to me, it just
> reveals the limitations of our current models of physics.

You didn't understand my argument. I sometimes don't understand yours.

>> A partial zombie occurs if only part of your brain is zombified.
>> Because this part of the brain (by definition) has the same observable
>> third person behaviour as it did before it was zombified, you would
>> lack the qualia of the replaced part while not noticing or behaving
>> differently. It is this which is absurd.
>
> That contradicts your view that the behavior of the mind must all be
> physically observable in the brain. We know that qualia don't
> physically exist in the brain, so that makes it a zombie already.

Behaviour is observable, qualia are not. I know I'm not a zombie but
you might be. I also know I'm not a partial zombie.

>>The only way out of the
>> absurdity is to say that it is impossible to make a brain component
>> with the same observable third person behaviour that didn't also have
>> the same qualia. (Sorry for the clumsiness of "observable third person
>> behaviour" - I should just say "behaviour" but I think in the past you
>> have taken this to include consciousness).
>
> No, the way out of it is to see that qualia can be absent, distorted,
> or replaced in the brain. Blind people learn Braille and use the same
> area of the brain that sighted people use for vision, only for tactile
> qualia. Synesthesia also shows that qualia are not fixed to
> functionality, and conversion disorders illustrate absent qualia
> without neurological deficit.
>
> Even if none of those things were true, to say that this unexplainable
> experiential dimension we live in must just 'come with' particular
> mathematical objects because we can't imagine being able to make
> something that acts like us but doesn't live in the same dimension has
> all the earmarks of a terrible theory.

Again I don't think you understand what would happen if you replaced
part of your brain with a qualia-less component that had the same
third person observable behaviour. Perhaps you could tell me in your
own words if you do.

>> If philosophical zombies were possible, they
>> would have the same survival and reproductive success as non-zombies.
>> But philosophical zombies did not evolve, suggesting to me that
>> consciousness is a necessary side-effect of any intelligent being.
>
> Philosophical zombies did evolve. They are called sociopaths.

Zombies lack all qualia, not just a conscience.

>> We have already started engineering brain replacement: cochlear
>> implants, artificial hippocampus. These are crude but it's early days
>> yet.
>
> It wouldn't matter if they were perfect. Using an artificial ear to
> hear with is not the same as becoming a computer program. People used
> to use a horn as a hearing aid. If I made a really fancy horn, could I
> replace your brain with it?

If the ear can be replaced with impunity why not the auditory nerve,
and if the auditory nerve why not the auditory cortex?

>> The feeling of free will is simply due to the fact that I don't know
>> what I'm going to do until I do it.
>
> Why would there be a feeling associated with that? What purpose would
> it serve to know or not know that you don't know what you are going to
> do if you can't control whether or not you do it?

I do control what I do, but I don't know what decision I'm going to
make until I make it. If I did know what decision I was going to make
then I could change my mind - in which case, I would again be in a
position where I don't know what decision I'm going to make. Not
knowing what decision I'm going to make until I make it is consistent
with determinism.


--
Stathis Papaioannou

Bruno Marchal

Sep 28, 2011, 10:26:40 AM9/28/11
to everyth...@googlegroups.com

On 27 Sep 2011, at 22:35, Craig Weinberg wrote:

> On Sep 27, 9:20 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> Noooo. Millions of neurons fire simultaneously in separate regions of
> the brain. Your assumptions about chain reactions being the only way
> that neurons fire is not correct. You owe the brain an apology.

Digital machines can emulate parallelism.
In all your answers to Stathis you elude the question by confusing
levels of explanation.
So either you postulate an infinitely low level (and thus infinities
in the brain), or you are introducing the magic mentioned by Stathis.
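[Bruno's first point — that a serial digital machine can emulate simultaneous firing — can be shown with a toy synchronous network update. An editor's illustrative sketch; the two-neuron weight matrix is invented for the example.]

```python
def tick(states, weights, threshold=1.0):
    """Advance every neuron one step 'simultaneously'.

    A serial loop emulates parallelism by reading from a frozen
    snapshot of the old states while writing the new ones.
    """
    snapshot = list(states)
    return [1 if sum(w * s for w, s in zip(row, snapshot)) >= threshold else 0
            for row in weights]

# Two mutually connected neurons: each excites the other.
weights = [[0.0, 1.0],
           [1.0, 0.0]]
state = [1, 0]
for _ in range(4):
    state = tick(state, weights)
    print(state)  # the activity ping-pongs between [0, 1] and [1, 0]
```

[Whether such an emulation preserves consciousness is exactly what the thread disputes; the sketch only shows that parallel update order poses no obstacle in principle.]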

Bruno


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Sep 28, 2011, 11:45:20 AM9/28/11
to Everything List
On Sep 28, 9:43 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Wed, Sep 28, 2011 at 6:35 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> Do you agree that if a
> >> non-observable causes a change in an observable, that would be like
> >> magic from the point of view of a scientist?
>
> > Not at all. We observe 3-p changes caused by 1-p intentionality
> > routinely. There is a study cited recently in that TV documentary
> > where the regions of vegetative patients brains associated with
> > coordinated movements light up an fMRI when being asked to imagine
> > playing tennis.http://web.me.com/adrian.owen/site/Publications_files/Owen-2006-Futur...
> > p. 693-4
> > Why do you want me to think that the ordinary relationship between the
> > brain and the mind is magic? The 'non-observable cause' is the patient
> > voluntarily imagining playing tennis. There is no other cause. They
> > were given a choice between tennis and house, and the result of the
> > fMRI was determined by nothing other than the patient's subjective
> > choice. So will you stop accusing me of witchcraft about this now or
> > is there going to be some other way of making me seem like I am the
> > one rejecting science when it is your position which broadly
> > reimagines the brain as some kind of closed-circuit Rube Goldberg
> > apparatus?
>
> The patient "voluntarily imagines playing tennis" if and only if
> certain neural processes occur in the brain.

The neural processes and the thoughts are different views of the same
thing. In the case of voluntarily imagining something, it is the
subjective content of the experiences being imagined which makes sense
and the neurological processes are the shadow. There is no strictly
neurological reason for their behavior, let alone one that evokes
'tennis'. If it were something involuntary, like a fever coming on,
then the neurological processes would be the active sensemaking agent
and the experience of getting sick would be the shadow. It's
bidirectional. I know that you won't admit that that could ever be the
case, but I don't understand why.
> >http://jn.physiology.org/content/96/6/3517.full?maxtoshow=&HITS=10&hi...
We can't see the thought through a microscope, no. We can only see it
from inside of the brain, but from here, of course we can see the
thought.

> So to a scientist, every movement in the
> body can be attributed to a chain of physical events.

What kind of a scientist? Do you not consider psychology, sociology,
or anthropology sciences?

> If a thought can
> cause a movement in the absence of a physical event, for example if
> ligand-dependent ion channels open and trigger an action potential in
> the absence of the ligand, that would be observed as magical, like a
> table levitating.

The thought *is* a physical event; it's just the subjective view of
it. It's many physical events, each with a subjective view, but
together, rather than forming a machine of objects related in space,
the experiential side is made of experiences which are shared as a
single, deeper, richer stream over time.

As for 'getting it', every time that you mention magic, I know that
you haven't yet understood what I'm talking about. I already
understand your view thoroughly, though, because I used to share the
same view.

>You seem to think that not only neurons but every
> cell has the capacity to do this sort of thing; so why has no
> scientist ever reported it?

You're not getting it. It's not a thing that cells have a capacity to
do, it's a thing that cells are. In addition to being a little pocket
of cytoplasm, they are little perceiving agents, just as a molecule or
an atom is. What they do is not a function just of what we can see from
the exterior, but of what the experience is on the interior.

>
> >>We can still say that
> >> thought A leads to feeling B, but what the scientist observes is that
> >> brain state A' (associated with thought A) leads to brain state B'
> >> (associated with feeling B). So although we can tell the story of the
> >> person in terms of thoughts and feelings, the scientist can tell the
> >> same story in terms of biochemical events. If the scientist
> >> understands the biochemistry then in theory he will be able to predict
> >> everything the person will do (or write probabilistic equations if
> >> truly random effects are significant in the brain), although in
> >> practice due to the complexity of the system this would be very
> >> difficult.
>
> > Some brain states do work that way, but some don't. Anyone that moves
> > their little finger is going to move it in a neurologically similar
> > way, but what people want to do for a career is not determinable in
> > the same way. It depends on where they are born, how they are raised,
> > what their opportunities are, etc. It's not something which can be
> > regressed from brain state Q to some kind of precursor brain state G.
>
> Where they were born, how they are raised, what the weather is like
> all has a physical effect on the brain. If some factor has no impact
> on the brain then that cannot possibly make a difference to the
> person. This is not to say that the person's trajectory through life
> can be predicted, but the weather cannot be predicted with certainty
> either.

The physical effect on the brain goes hand in hand with the effects of
their nurture, but it's irrelevant. It's the residue of the
experience, not the cause of it.

>
> >> "I" am the ensemble of neurons in the brain which when they are
> >> functioning properly give rise to consciousness and a sense of
> >> identity. "I" never do anything that can't be explained in terms of a
> >> chain of neuronal events.
>
> > What makes you think that 'giving rise to consciousness and a sense of
> > identity' can be explained in terms of a chain of neuronal events.
> > It's just because you assume a priori that is what consciousness is.
>
> Without trying to "explain" consciousness I know the circumstances
> under which consciousness can be produced.

So you avoid the question altogether, and just take consciousness for
granted.

>
> >> The making sense of what you read occurs due to certain neuronal
> >> activity in the language centre of your brain. This may or may not
> >> cause you to take a certain action, just as a coin may come up heads
> >> or tails.
>
> > Why is the making sense necessary at all? Why wouldn't the neuronal
> > activity of reading just cause the neuronal activity of taking a
> > certain action?
>
> It does - and in so doing, understanding occurs. There are varying
> degrees of understanding, ranging from blindly following a protocol to
> analysing what you read in depth. If you analyse what you read in
> depth the neural processing is more complicated and the resulting
> decision more difficult to predict. In each case, the understanding
> supervenes on the neural activity. Disembodied understanding does not
> come forth from an immaterial soul to move your hand.

You are not answering my question. Why does there need to be
'understanding' at all? You are saying that neurology causes something
to occur: understanding. What do you mean by that? What is it? Magic?
Metaphysics?

>
> >> > A few micrograms of LSD or ricin can change a person's entire life or
> >> > end it.
>
> >> Yes, there are crucial parts of the system which don't tolerate
> >> disruption. It's the same with any machine.
>
> > Are you assuming then that consciousness is not such a disruption
> > intolerant part of the system?
>
> Consciousness is affected by small amounts of specific chemicals but
> not affected by quite gross physical changes such as the loss of
> millions of neurons in the course of a day.

It can be affected by gross physical changes too, such as a
concussion. Every change in substance or function matters, but some
things matter to us more than others depending on who we are and what
we expect our awareness to be like.
How do you explain that qualia are not in the brain?

>
> >>The only way out of the
> >> absurdity is to say that it is impossible to make a brain component
> >> with the same observable third person behaviour that didn't also have
> >> the same qualia. (Sorry for the clumsiness of "observable third person
> >> behaviour" - I should just say "behaviour" but I think in the past you
> >> have taken this to include consciousness).
>
> > No, the way out of it is to see that qualia can be absent, distorted,
> > or replaced in the brain. Blind people learn Braille and use the same
> > area of the brain that sighted people use for vision, only for tactile
> > qualia. Synesthesia also shows that qualia are not fixed to
> > functionality, and conversion disorders illustrate absent qualia
> > without neurological deficit.
>
> > Even if none of those things were true, to say that this unexplainable
> > experiential dimension we live in must just 'come with' particular
> > mathematical objects because we can't imagine being able to make
> > something that acts like us but doesn't live in the same dimension has
> > all the earmarks of a terrible theory.
>
> Again I don't think you understand what would happen if you replaced
> part of your brain with a qualia-less component that had the same
> third person observable behaviour. Perhaps you could tell me in your
> own words if you do.

What would really happen is that it could not have the same third
person observable behavior. If someone is deaf, you cannot observe
their lack of hearing by observing them, unless you intentionally try
to test them. If you replace someone's eyes with eyes which only see in
the x-ray spectrum, then the visual cortex would pick it up in the
familiar colors of the visible spectrum. If you replaced the visual
cortex with something that processes optical stimulation in the eyes
invisibly, then the patient would see nothing but would develop
perceptual compensation from their other senses very rapidly compared
with someone who went blind suddenly. They would have to learn to read
their new optical capacity and it would not be visual, but it would
enable them eventually to behave as a sighted person in most relevant
ways.


>
> >> If philosophical zombies were possible, they
> >> would have the same survival and reproductive success as non-zombies.
> >> But philosophical zombies did not evolve, suggesting to me that
> >> consciousness is a necessary side-effect of any intelligent being.
>
> > Philosophical zombies did evolve. They are called sociopaths.
>
> Zombies lack all qualia, not just a conscience.

It's still a valid example of a partial zombie.

>
> >> We have already started engineering brain replacement: cochlear
> >> implants, artificial hippocampus. These are crude but it's early days
> >> yet.
>
> > It wouldn't matter if they were perfect. Using artificial ear to hear
> > with is not the same as becoming a computer program. People used to
> > use a horn as a hearing aid. If I made a really fancy horn could I
> > replace your brain with it?
>
> If the ear can be replaced with impunity why not the auditory nerve,
> and if the auditory nerve why not the auditory cortex?
>
> >> The feeling of free will is simply due to the fact that I don't know
> >> what I'm going to do until I do it.
>
> > Why would there be a feeling associated with that? What purpose would
> > it serve to know or not know that you don't know what you are going to
> > do if you can't control whether or not you do it?
>
> I do control what I do, but I don't know what decision I'm going to
> make until I make it. If I did know what decision I was going to make
> then I could change my mind - in which case, I would again be in a
> position where I don't know what decision I'm going to make. Not
> knowing what decision I'm going to make until I make it is consistent
> with determinism.

But what would be the point of knowing about any of that at all? Why
does there need to be an experience of the decision being made?

Craig

Craig Weinberg
Sep 28, 2011, 11:48:42 AM
to Everything List
Yes, this is just a tangent, I'm trying to show that the model of the
brain as a chain reaction is factually incorrect. I agree, parallelism
says nothing about whether it's computational or not, it's just that
Stathis is trying to actually claim that psychological processes
cannot drive lower level neurology.

Craig

Bruno Marchal
Sep 29, 2011, 3:21:47 AM
to everyth...@googlegroups.com

In a sense I can follow you. If I feel in pain I can take a drug, and
in this case a high level psychological process can change a lower
level neuro process. But I am sure Stathis agrees with this. That whole
cycle can still be driven by still lower computable laws. A universal
machine can emulate another self-transforming universal machine.
That's the point.
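That emulation claim can be sketched in a few lines of toy code (the instruction set and the names `run`, `univ`, and `"extend"` are invented for illustration, not anything Bruno specifies): a tiny machine whose programs may rewrite themselves while running, and a universal program that takes any encoded program as data and runs it to the same result.

```python
# A toy sketch of "a universal machine can emulate another
# self-transforming universal machine". The instruction set and names
# (run, univ, "extend") are invented for illustration.

def run(prog, x):
    """Execute a tiny instruction list on accumulator x.
    The program may rewrite itself while running ("extend")."""
    prog = list(prog)  # local copy, since the program can grow
    i = 0
    while i < len(prog):
        op, arg = prog[i]
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
        elif op == "extend":
            # self-transformation: the running program appends
            # a new instruction to its own tail
            prog.append(("add", arg))
        i += 1
    return x

def univ(encoded):
    """The universal program: takes an encoded (program, input)
    pair as data and runs it -- one level of emulation."""
    prog, x = encoded
    return run(prog, x)

self_modifying = [("mul", 2), ("extend", 3)]

direct = run(self_modifying, 5)       # 5*2 = 10, then the appended add: 13
emulated = univ((self_modifying, 5))  # same result via interpretation
assert direct == emulated == 13
```

Nothing stops `univ` from being handed an encoding of itself, which is the sense in which one universal machine can carry another, self-transformations included.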

Bruno


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg
Sep 29, 2011, 8:36:09 AM
to Everything List
I would still say that at some point 'I' participates directly and non-
deterministically in the process. Even if only to arbitrate between
many conflicting subordinate senses and motives, all of which I
suspect have more deterministic but still 'not-as-deterministic-as'
processes such as those within inorganic molecules or atoms. The I
herself may not be completely non-computational and indeterministic,
but that doesn't mean that she has no control of her thoughts,
opinions, and actions either.

What my hypothesis offers though, is to make concrete the abstraction
of 'lower computable laws' so that they are not metaphysical, but
intraphysical. They are actual sensorimotive experiences localized in
and through matter. Like the laws we follow as citizens, we are
compelled to do so by the senses of social identification and motives
of avoiding negative consequences. It's a subjective experience which
can be abstracted into a formula with reasonable success, but the
experience is not the same thing as the formula, the map is not the
territory, etc.

We as individuals follow 'laws' not because those laws are
metaphysical programs which are deterministically executed on
unwitting helpless voyeurs, but because the customs, practices, and
expectations of our niche are recapitulated locally in the individual
as sensorimotive dynamics. The
actual literal process by which laws and customs are upheld is not
though explicit codes, it's because we don't like the feeling of going
against our conditioning. It makes us nervous and ashamed; fearful,
etc.

I'm not saying that atoms bond together because they are lonely or the
Krebs cycle propagates because citric acid was raised to believe that
it has a job to do, but that in each case there is likely a
corresponding nano-sensorimotive experience going on. When you realize
that the senses and motives which have grown great enough to catch the
attention of the 'I' are trillions of times more saturated
and nuanced than those sensorimotives arising from the individual
cells and molecules, we can see that the proto-experience of an
individual neuron-eukaryote need not be anthropomorphized to a large
extent.

It can be calculated as a history of action potentials, but that
doesn't explain what the action potentials actually are. They are semi-
voluntary (some more voluntary than others) participatory spasms. We
are used to imagining these impulses like electric sparks or flashes
of light inside the brain, but they only look like sparks when viewed
through a device which records them that way. To the naked eye you
won't see any sparks, and to the subject whose brain it is, there are
only thoughts, images, and feelings.

Craig

Bruno Marchal
Sep 29, 2011, 10:29:01 AM
to everyth...@googlegroups.com


I don't feel this very compelling.
You have to assume some primitive matter, and notion of localization.
This is the kind of strong metaphysical and aristotleian assumption
which I am not sure to see the need for, beyond extrapolating from our
direct experience.
You have to assume mind, and a form of panpsychism, which seems to me
as problematic as what it is supposed to explain or at least
describe.
The link between both remains as unexplainable as before.

You attribute to me a metaphysical assumption, where I assume only
what is taught in high school to everyone, + the idea that at some
level matter (not primitive matter, but the matter we can observe when
we look at our bodies) obeys deterministic laws, where you make three
metaphysical assumptions: matter, mind and a link which refers to
notions that you don't succeed in defining (like 'sensorimotive').

Then you derive from this that the third person "I" is not Turing
emulable, but this appears to be non justified too, even if we are
willing to accept some meaning in those nanosensorimotive actions
(which I am not, for I don't have a clue about what they can be).

Bruno


http://iridia.ulb.ac.be/~marchal/

Stathis Papaioannou
Sep 29, 2011, 10:31:30 AM
to everyth...@googlegroups.com
On Thu, Sep 29, 2011 at 1:45 AM, Craig Weinberg <whats...@gmail.com> wrote:

> The neural processes and the thoughts are different views of the same
> thing. In the case of voluntarily imagining something, it is the
> subjective content of the experiences being imagined which makes sense
> and the neurological processes are the shadow. There is no strictly
> neurological reason for their behavior, let alone one that evokes
> 'tennis'. If it were something involuntary, like a fever coming on,
> then the neurological processes would be the active sensemaking agent
> and the experience of getting sick would be the shadow. It's bi-
> directional. I know that you won't admit that that could ever be the
> case, but I don't understand why.

There *is* a strictly neurological reason for the 3-P observable
behaviour. If we limit ourselves to talking about that, do you agree?

>> If a thought can
>> cause a movement in the absence of a physical event, for example if
>> ligand-dependent ion channels open and trigger an action potential in
>> the absence of the ligand, that would be observed as magical, like a
>> table levitating.
>
> The thought *is* a physical event, it's just the subjective view of
> it. It's many physical events, each with a subjective view, but
> together, rather than forming a machine of objects related in space,
> the experiential side is experiences over time which are shared as a
> single, deeper, richer experience stream over time.

But you can't see the thought. Restrict discussion for now to the 3-P
observable behaviour of a neuron being investigated by a cell
biologist. From the scientist's point of view, the neuron only fires
in response to stimuli such as neurotransmitters at the synapse
(depending on what sort of neuron it is). Do you see that if the
thought makes the neuron do anything other than what the scientist
expects it to do from consideration of its physical properties and the
physical properties of the environment then it would be observed to be
behaving magically?
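The picture described here can be made concrete with a leaky integrate-and-fire model (a standard textbook abstraction with illustrative parameters, not a claim about the biology in dispute): the model neuron's potential leaks toward rest and crosses threshold only when synaptic input pushes it there, so a spike with the input held at zero would be exactly the 'magical' event.

```python
# A hedged sketch: a leaky integrate-and-fire neuron (textbook
# abstraction; threshold and leak values are made up for the demo).
# The membrane potential decays toward rest and spikes only when
# synaptic input drives it over threshold.

def simulate(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the model neuron spikes."""
    v, spikes = 0.0, []
    for t, inp in enumerate(inputs):
        v = v * leak + inp      # leaky integration of synaptic input
        if v >= threshold:      # depolarization reaches threshold
            spikes.append(t)    # action potential
            v = 0.0             # reset after the spike
    return spikes

quiet = simulate([0.0] * 50)     # no input at all
driven = simulate([0.3] * 50)    # steady subthreshold input accumulates

assert quiet == []               # no stimulus, no spikes
assert driven[0] == 3            # fires only after input integrates up
```

In this model a spike appearing in `quiet` would be the analogue of the ion channel opening without the ligand.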

> You are not answering my question. Why does there need to be
> 'understanding' at all? You are saying that neurology causes something
> to occur: understanding. What do you mean by that. What is it? Magic?
> Metaphysics?

It's something which cannot be reduced to something simpler.

>> Again I don't think you understand what would happen if you replaced
>> part of your brain with a qualia-less component that had the same
>> third person observable behaviour. Perhaps you could tell me in your
>> own words if you do.
>
> What would really happen is that it could not have the same third
> person observable behavior. If someone is deaf, you cannot observe
> their lack of hearing by observing them, unless you intentionally try
> to test them. If you replace someones eyes with eyes which only see in
> the x-ray spectrum, then the visual cortex would pick it up in the
> familiar colors of the visible spectrum. If you replaced the visual
> cortex with something that processes optical stimulation in the eyes
> invisibly, then the patient would see nothing but would develop
> perceptual compensation from their other senses very rapidly compared
> with someone who went blind suddenly. They would have to learn to read
> their new optical capacity and it would not be visual, but it would
> enable them eventually to behave as a sighted person in most relevant
> ways.

The replacement part reproduces the 3-P behaviour of the biological
part. This means the rest of the brain also has the same 3-P
behaviour, since it is subjected to the same 3-P environmental
influences from the replacement part (that is what was reproduced,
even if the qualia were not). So the subject behaves as if he has
normal vision and hearing and believes that he has normal vision and
hearing.

You may object that the rest of the subject's brain does not behave
normally since it lacks the input from the qualia. But if the qualia
affect neurons directly, over and above what you would expect from the
qualia-less physical activity, that would mean that magical events are
observed.


--
Stathis Papaioannou

Craig Weinberg
Sep 29, 2011, 7:38:27 PM
to Everything List
On Sep 29, 10:29 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

> I don't feel this very compelling.
> You have to assume some primitive matter, and notion of localization.  

Why? I think you only have to assume the appearance of matter and
localization, which we do already.

> This is the kind of strong metaphysical and aristotleian assumption  
> which I am not sure to see the need for, beyond extrapolating from our  
> direct experience.

Is it better to extrapolate only from indirect experience?

> You have to assume mind, and a form of panpsychism, which seems to me  
> as much problematic than what it is supposed to explain or at least  
> describe.

It wouldn't be panpsychism exactly, any more than neurochemistry is
panbrainism. The idea is that whatever sensorimotive experience is
taking place at these microcosmic levels is nothing like what we, as a
conscious collaboration of trillions of these things, can relate to.
It's more like protopsychism.

> The link between both remains as unexplainable as before.

Mind would be a sensorimotive structure. The link between the
sensorimotive and electromagnetic is the invariance between the two.

>
> You attribute to me a metaphysical assumption, where I assume only  
> what is taught in high school to everyone, + the idea that at some  
> level matter (not primitive matter, but the matter we can observe when  
> we look at our bodies) obeys deterministic laws, where you make three  
> metaphysical assumptions: matter, mind and a link which refer to  
> notion that you don't succeed to define (like sensorimotive).
>
> Then you derive from this that the third person "I" is not Turing  
> emulable, but this appears to be non justified too, even if we are  
> willing to accept some meaning in those nanosensorimotive actions  
> (which I am not, for I don't have a clue about what they can be).

The "I" is always first person. The brain or body would be third
person. What do you think of Super-Turing computation?

Craig

Craig Weinberg
Sep 29, 2011, 7:43:00 PM
to Everything List
On Sep 29, 10:31 am, Stathis Papaioannou <stath...@gmail.com> wrote:

> There *is* a strictly neurological reason for the 3-P observable
> behaviour. If we limit ourselves to talking about that, do you agree?

I would say no, because I would not describe something like 'gambling'
as strictly neurological reason in the sense that I think you intend
it. Of course all of our feelings and perceptions are neurological
experiences in the broad sense, but that's just because neurological
structures feel and perceive.

>
> >> If a thought can
> >> cause a movement in the absence of a physical event, for example if
> >> ligand-dependent ion channels open and trigger an action potential in
> >> the absence of the ligand, that would be observed as magical, like a
> >> table levitating.
>
> > The thought *is* a physical event, it's just the subjective view of
> > it. It's many physical events, each with a subjective view, but
> > together, rather than forming a machine of objects related in space,
> > the experiential side is experiences over time which are shared as a
> > single, deeper, richer experience stream over time.
>
> But you can't see the thought. Restrict discussion for now to the 3-P
> observable behaviour of a neuron being investigated by a cell
> biologist. From the scientist's point of view, the neuron only fires
> in response to stimuli such as neurotransmitters at the synapse
> (depending on what sort of neuron it is).

No, you can see in that brain animation that the neuron fires whenever
it needs to/wants to. Its actions aren't inevitable or scheduled in
some way, it's responding directly to the overall perceptions and
motives of the person as a whole as well as all of the congruences and
conflicts amongst the subordinate neural pathways.

>Do you see that if the
> thought makes the neuron do anything other than what the scientist
> expects it to do from consideration of its physical properties and the
> physical properties of the environment then it would be observed to be
> behaving magically?
>

Only if the scientist knew nothing about neurology. What the neurons
do, collectively *is* thought. It's no different than how bacteria use
quorum sensing to make collective decisions. I understand why you
might assume that neurons are passive receptacles to some kind of
neurological law, but that is not the case. They are living organisms.

Look at how heart cells synchronize: http://www.youtube.com/watch?v=RO4pAc21M24

They communicate with each other. They do things as a group. Neurons
are much more complicated and unpredictable. They are able to do what
they want to do, not just what they have to do.
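What the linked video shows can be illustrated with a Kuramoto-style coupled-oscillator sketch (an abstract model with made-up parameters, not a cardiac simulation): each oscillator nudges its rhythm toward the group's, and an initially incoherent population pulls itself into step.

```python
import math
import random

# Illustrative Kuramoto-style model: N oscillators with slightly
# different natural rates, each pulled toward the mean phase of the
# group. All parameters (N, K, dt) are invented for the demo.

random.seed(1)
N, K, dt = 20, 2.0, 0.01
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]   # phases
omega = [1.0 + random.uniform(-0.1, 0.1) for _ in range(N)]  # natural rates

def coherence(phases):
    """Order parameter r: 0 = incoherent, 1 = perfectly in phase."""
    c = sum(math.cos(t) for t in phases) / len(phases)
    s = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(c, s)

r0 = coherence(theta)
for _ in range(2000):
    c = sum(math.cos(t) for t in theta) / N
    s = sum(math.sin(t) for t in theta) / N
    r, psi = math.hypot(c, s), math.atan2(s, c)
    # each phase drifts at its own rate but is pulled toward the mean
    theta = [t + dt * (w + K * r * math.sin(psi - t))
             for t, w in zip(theta, omega)]
r1 = coherence(theta)

assert r1 > 0.9   # the initially scattered population has synchronized
```

The model says nothing about what, if anything, the cells experience; it only shows that group-level rhythm can arise from purely local mutual adjustment.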

> > You are not answering my question. Why does there need to be
> > 'understanding' at all? You are saying that neurology causes something
> > to occur: understanding. What do you mean by that. What is it? Magic?
> > Metaphysics?
>
> It's something which cannot be reduced to something simpler.
>

Irreducibility is just a characteristic of it. Saying that doesn't
explain what it is.

>
> >> Again I don't think you understand what would happen if you replaced
> >> part of your brain with a qualia-less component that had the same
> >> third person observable behaviour. Perhaps you could tell me in your
> >> own words if you do.
>
> > What would really happen is that it could not have the same third
> > person observable behavior. If someone is deaf, you cannot observe
> > their lack of hearing by observing them, unless you intentionally try
> > to test them. If you replace someones eyes with eyes which only see in
> > the x-ray spectrum, then the visual cortex would pick it up in the
> > familiar colors of the visible spectrum. If you replaced the visual
> > cortex with something that processes optical stimulation in the eyes
> > invisibly, then the patient would see nothing but would develop
> > perceptual compensation from their other senses very rapidly compared
> > with someone who went blind suddenly. They would have to learn to read
> > their new optical capacity and it would not be visual, but it would
> > enable them eventually to behave as a sighted person in most relevant
> > ways.
>
> The replacement part reproduces the 3-P behaviour of the biological
> part.

I think that's a mistake to begin with. Does a plastic plant reproduce
the 3-P behavior of a biological plant? If you don't know what the
behavior is, how do you know that it's possible to reproduce it by
other means? You're just assuming that it is like a metal washer that
can be replaced with a plastic one, but we have no reason to think
that this is anything like that.

>This means the rest of the brain also has the same 3-P
> behaviour, since it is subjected to the same 3-P environmental
> influences from the replacement part (that is what was reproduced,
> even if the qualia were not). So the subject behaves as if he has
> normal vision and hearing and believes that he has normal vision and
> hearing.

It's jumping to a conclusion based on an incorrect assumption. I've
already outlined exactly what I think would happen in the different
replacement scenarios. None of them involve the subject immediately
behaving as if they could see without seeing. It's a blatantly
simplistic and overly theoretical proposition to see this replacement
idea as illuminating. It's a bad example. It's not even good science
fiction.

>
> You may object that the rest of the subject's brain does not behave
> normally since it lacks the input from the qualia. But if the qualia
> affect neurons directly, over and above what you would expect from the
> qualia-less physical activity, that would mean that magical events are
> observed.

The qualia affect the experience of the neuron's interior. Whether
that translates into some motive output behavior depends on the
neuron's interpretation. You are not seeing that there is constant
interaction between the sensorimotive interior and electromagnetic
exterior - sometimes it's the sensorimotive inducing the
electromagnetic, and sometimes it's the other way around. You can't
always tell which is which from the outside or the inside.

Craig

Jason Resch
Sep 29, 2011, 11:14:39 PM
to everyth...@googlegroups.com
Craig, do the neurons violate the conservation of energy and
momentum? And if not, then how can they have any unexpected effects?

Jason

On Sep 29, 2011, at 6:43 PM, Craig Weinberg <whats...@gmail.com>
wrote:

Bruno Marchal
Sep 30, 2011, 4:56:21 AM
to everyth...@googlegroups.com

On 30 Sep 2011, at 01:38, Craig Weinberg wrote:

> On Sep 29, 10:29 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>> I don't feel this very compelling.
>> You have to assume some primitive matter, and notion of localization.
>
> Why? I think you only have to assume the appearance of matter and
> localization, which we do already.

That would make my point, except it is not clear, especially with what
you said before.
Appearance to whom, and to what kind of object?
You lose me completely.

>
>> This is the kind of strong metaphysical and aristotleian assumption
>> which I am not sure to see the need for, beyond extrapolating from
>> our
>> direct experience.
>
> Is it better to extrapolate only from indirect experience?

It is better to derive from clear assumptions.


>
>> You have to assume mind, and a form of panpsychism, which seems to me
>> as much problematic than what it is supposed to explain or at least
>> describe.
>
> It wouldn't be panpsychism exactly, any more than neurochemistry is
> panbrainism. The idea is that whatever sensorimotive experience taking
> place at these microcosmic levels

But now you have to define this, and explain where the microcosmos
illusion comes from, or your theory is circular.


> is nothing like what we, as a
> conscious collaboration of trillions of these things, can relate to.
> It's more like protopsychism.

... and where that protopsychism come from, and what is it.
Could you clearly separate your assumptions, and your reasoning (if
there is any). I just try to understand.

>
>> The link between both remains as unexplainable as before.
>
> Mind would be a sensorimotive structure.

A physical structure? A mathematical structure? A theological structure?


> The link between the
> sensorimotive and electromagnetic is the invariance between the two.

?


>
>>
>> You attribute to me a metaphysical assumption, where I assume only
>> what is taught in high school to everyone, + the idea that at some
>> level matter (not primitive matter, but the matter we can observe
>> when
>> we look at our bodies) obeys deterministic laws, where you make three
>> metaphysical assumptions: matter, mind and a link which refer to
>> notion that you don't succeed to define (like sensorimotive).
>>
>> Then you derive from this that the third person "I" is not Turing
>> emulable, but this appears to be non justified too, even if we are
>> willing to accept some meaning in those nanosensorimotive actions
>> (which I am not, for I don't have a clue about what they can be).
>
> The "I" is always first person.

I don't think so. When I say that my child is hungry, I refer to a 1-I
in the third person way. That's empathy.
And there is also a 3-I, which is the body, or its local description
handled by the "doctor". They correspond in the theory to an abstract
notion of Gödel number. It is our "code" (at the right level) in the
comp frame.
In fact there are as many notions of 'I' as there are intensional
variants of self-reference. They all have a role in the shaping of
reality.


> The brain or body would be third
> person. What do you think of Super-Turing computation?

Which one?
Most are Turing emulated by the UD, and correspond to Turing's notion
of Oracle computable machine. It is an open problem if such form of TM
can exist physically, both in usual physics and in the comp physics.
Of course there might be notions of super-Turing machine being not
digitally emulable (even with oracle). You can use them to illustrate
your non-comp theory. That would make your theory far clearer indeed.
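Turing's oracle-machine notion mentioned here can be sketched as an ordinary program allowed to consult a black box. The oracle below is a hypothetical stand-in table, since a real halting oracle is not implementable; only the control structure, computation *relative to* an oracle, is the point.

```python
# Illustrative sketch of an oracle machine: ordinary computation that
# may query a black-box oracle for answers it could not compute itself.
# halts_stub is a made-up stand-in table, not a real halting oracle.

def relative_computation(n, oracle):
    """Count how many of the programs 0..n-1 halt, per the oracle."""
    return sum(1 for i in range(n) if oracle(i))

def halts_stub(i):
    # Hypothetical stand-in: pretend the even-numbered programs halt.
    return i % 2 == 0

assert relative_computation(10, halts_stub) == 5
```

Swapping in a different oracle table changes the answers but not the machine; that separation is what makes the notion well-defined even when no physical oracle exists.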

Bruno

http://iridia.ulb.ac.be/~marchal/

Craig Weinberg
Sep 30, 2011, 8:22:45 AM
to Everything List
On Sep 29, 11:14 pm, Jason Resch <jasonre...@gmail.com> wrote:
> Craig, do the neurons violate the conservation of energy and  
> momentum?  And if not, then how can they have any unexpected effects?
>

No. If you are wondering whether I think that anything that
contradicts established observations of physics, chemistry, or biology
is going on, the answer has always been no, and the fact that you are
still asking means that you don't understand what I've said.

As long as you expect neurons to act like neurons - including to have
self-directed thoughts and feelings at least in large groups, then
they do not have any unexpected effects. If you watch a color TV
program on a black and white TV, you aren't going to see any
'unexpected effects'. That's the point, what we expect from physics is
not sufficient to explain all of the properties which we know first
hand are present. You have to first look at it using a 'Color TV'.

If you move your arm, does it violate the conservation of energy and
momentum? No. Life is not a closed system. It exports entropy outside
of itself so that it does what it needs to do and what it wants to do.
You have to look at what is actually occurring in real life instead of
starting from the black and white TV of physics and trying to shoehorn
the technicolor, non-deterministic universe into it.

Craig

Jason Resch
Sep 30, 2011, 10:16:12 AM
to everyth...@googlegroups.com

On Sep 30, 2011, at 7:22 AM, Craig Weinberg <whats...@gmail.com>
wrote:

> On Sep 29, 11:14 pm, Jason Resch <jasonre...@gmail.com> wrote:


>> Craig, do the neurons violate the conservation of energy and
>> momentum? And if not, then how can they have any unexpected effects?
>>
>
> No. If you are wondering whether I think that anything that
> contradicts established observations of physics, chemistry, or biology
> is going on, the answer has always been no, and the fact that you are
> still asking means that you don't understand what I've said.

If it seems that I have misunderstood it is because I see a
contradiction. If a neuron opens its ion channels because of a
thought, then thought is something we can see all the correlates of in
terms of third person observable particle collisions. If the ion
channel were to open without the observable and necessary particle
collisions then the neuron would be violating the conservation of
momentum.

Jason

>
>
> As long as you expect neurons to act like neurons - including to have
> self-directed thoughts and feelings at least in large groups, then
> they do not have any unexpected effects. If you watch a color TV
> program on a black and white TV, you aren't going to see any
> 'unexpected effects'. That's the point, what we expect from physics is
> not sufficient to explain all of the properties which we know first
> hand are present. You have to first look at it using a 'Color TV'.
>
> If you move your arm, does it violate the conservation of energy and
> momentum? No. Life is not a closed system. It exports entropy outside
> of itself so that it does what it needs to do and what it wants to do.
> You have to look at what is actually occurring in real life instead of
> starting from the black and white TV of physics and trying to shoehorn
> the technicolor, non-deterministic universe into it.
>
> Craig
>

Craig Weinberg
Sep 30, 2011, 8:44:15 PM
to Everything List
On Sep 30, 10:16 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Sep 30, 2011, at 7:22 AM, Craig Weinberg <whatsons...@gmail.com>  
> wrote:
>
> > On Sep 29, 11:14 pm, Jason Resch <jasonre...@gmail.com> wrote:
> >> Craig, do the neurons violate the conservation of energy and
> >> momentum?  And if not, then how can they have any unexpected effects?
>
> > No. If you are wondering whether I think that anything that
> > contradicts established observations of physics, chemistry, or biology
> > is going on, the answer has always been no, and the fact that you are
> > still asking means that you don't understand what I've said.
>
> If it seems that I have misunderstood it is because I see a  
> contradiction.  If a neuron opens it's ion channels because of a  
> thought, then thought is something we can see all the correlates of in  
> terms of third person observable particle collisions.  If the ion  
> channel were to open without the observable and necessary particle  
> collisions then the neuron would be violating the conservation if  
> momentum.

It's not the particle collisions that cause an ion channel to open,
it's the neuron's sensitivity to specific electrochemical conditions
associated with neurotransmitter molecules, and its ability to
respond with a specific physical change. All of those changes are
accompanied by qualitative experiences on that microcosmic level. Our
thoughts do not cause the ion channels to directly open or close any
more than a screen writer causes the pixels of your TV to get brighter
or dimmer, you are talking about two entirely different scales of
perception. Think of our thoughts and feelings as the 'back end' of
the total physical 'front end' activity of the brain. The back end
thoughts and feelings cannot be reduced to the front end activities of
neurons or ion channels, but they can be reduced to the back end
experiences of those neurons or ion channels - almost, except that
they synergize in a more significant way than front end phenomena can.

Think of it like a fractal visualization if you want, where the large design is
always emerging from small designs, but imagine that the large design
and the small designs are both controlled by separate, but overlapping
intelligences so that sometimes the small forms change and propagate
to the larger picture and other times the largest picture changes and
all of the smaller images are consequently changed. Now imagine that
the entire fractal dynamic has an invisible, private backstage to it,
which has no fractal shapes developing and shifting every second, but
it has instead flavors and sounds that change at completely different
intervals of time than the front end fractal, so that the pulsating
rhythms of the fractal are represented on the back end as long
melodies and fragrant journeys.

Both the visual fractal and the olfactory musical follow some of the
same cues exactly and both of them diverge from each other completely
as well so that you cannot look at the fractal and find some graphic
mechanism that produces a song, and the existence of the song does not
mean that there is an invisible musicality pushing the pixels of the
fractal around, it's just that they are like the two ends of a bowtie;
one matter across space and the other experience through time. They
influence each other - sometimes intentionally, sometimes arbitrarily,
and sometimes in a conflicting or self defeating way.
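
The two-way influence between scales in this analogy can be put into a
toy form. The sketch below is only my illustration of mutual macro/micro
coupling (a pair of coupled logistic maps); it assumes nothing about
brains or fractals beyond the bare idea that the large pattern and the
small patterns each nudge the other:

```python
# Toy illustration of two-way cross-scale influence (not a brain model):
# one "macro" state and several "micro" states, each nudging the other.

def logistic(x, r=3.9):
    # a simple chaotic map on [0, 1]
    return r * x * (1 - x)

def step(macro, micros, eps=0.05):
    # micro -> macro: the large pattern feels the average of the small ones
    mean_micro = sum(micros) / len(micros)
    new_macro = (1 - eps) * logistic(macro) + eps * mean_micro
    # macro -> micro: each small pattern is biased by the large one
    new_micros = [(1 - eps) * logistic(m) + eps * new_macro for m in micros]
    return new_macro, new_micros

macro, micros = 0.3, [0.1, 0.5, 0.9]
for _ in range(100):
    macro, micros = step(macro, micros)
```

With eps = 0 the two scales evolve independently; any eps > 0 lets
changes propagate in both directions, which is all the analogy needs.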

Craig

Craig Weinberg

Sep 30, 2011, 9:39:24 PM9/30/11
to Everything List
On Sep 30, 4:56 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 30 Sep 2011, at 01:38, Craig Weinberg wrote:
>
> > On Sep 29, 10:29 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
> >> I don't feel this very compelling.
> >> You have to assume some primitive matter, and notion of localization.
>
> > Why? I think you only have to assume the appearance of matter and
> > localization, which we do already.
>
> That would make my point, except it is not clear, especially with what  
> you said before.
> Appearance to whom, and to what kind of object?
> You lose me completely.

The matter that seems like substance to us from our naive perception
seems substantial because of what it is that we actually are. Matter
on different scales and densities might be invisible and intangible,
or like the planet as a whole, just out of range. What we experience
externally is only the liminal surfaces which face the gaps between
matter. The interior of matter is nothing like a substance, it's the
opposite of a substance, it's a sensorimotive experience over time.

The singularity is all the matter that there is, was, and will be, but
it has no exterior - no cracks made of space or time, it's all
interiority. It's feelings, images, experiences, expectations, dreams,
etc, and whatever countless other forms might exist in the cosmos. You
can use arithmetic to render an impersonation of feeling, as you can
write a song that feels arithmetic - but not all songs feel
arithmetic. You can write a poem about a color or you can write an
equation about visible electromagnetism, but neither completely
describe either color or electromagnetism.

>
>
>
> >> This is the kind of strong metaphysical and aristotleian assumption
> >> which I am not sure to see the need for, beyond extrapolating from  
> >> our
> >> direct experience.
>
> > Is it better to extrapolate only from indirect experience?
>
> It is better to derive from clear assumptions.

Clear assumptions can be the most misleading kind.

>
>
>
> >> You have to assume mind, and a form of panpsychism, which seems to me
> >> as much problematic than what it is supposed to explain or at least
> >> describe.
>
> > It wouldn't be panpsychism exactly, any more than neurochemistry is
> > panbrainism. The idea is that whatever sensorimotive experience taking
> > place at these microcosmic levels
>
> But now you have to define this, and explain where the microcosmos  
> illusion comes from, or your theory is circular.

I don't think there is a microcosmos illusion, unless you are talking
about the current assumptions of the Standard Model as particles.
That's not an illusion though, just a specialized interpretation that
doesn't scale up to the macrocosm. As far as where sensorimotive
phenomena come from, they precede causality. 'Comes from' is a
sensorimotive proposition and not the other way around. The
singularity functions inherently as supremacy of orientation, and
sense and motive are energetic functions of the difference between it
and its existential annihilation through time and space.

>
> > is nothing like what we, as a
> > conscious collaboration of trillions of these things, can relate to.
> > It's more like protopsychism.
>
> ... and where that protopsychism come from, and what is it.
> Could you clearly separate your assumptions, and your reasoning (if  
> there is any). I just try to understand.

Specifically, like if you have any two atoms, something must have a
sense of what is supposed to happen when they get close to each other.
Iron atoms have a particular way of relating that's different from
carbon atoms, and that relation can be quantified. That doesn't mean
that the relation is nothing but a quantitative skeleton. There is an
actual experience going on - an attraction, a repulsion, momentum,
acceleration...various states of holding, releasing, or binding a
'charge'. What looks like a charge to us under a microscope is in fact
a proto-feeling with an associated range of proto-motivations.

>
>
>
> >> The link between both remains as unexplainable as before.
>
> > Mind would be a sensorimotive structure.
>
> A physical structure? A mathematical structure? A theological structure?

No, a sensorimotive structure - which could encompass mathematical,
theological, or physical styles. It's an experience that plays out
over time and has participatory aspects. Some parts of the structure
are quite literal and map to muscle movements and discrete neural
pathways, and other ranges are lower frequency, broader, deeper, more
continuous and poetic non-structure. It's a much wider band than that
which is observable through physical instruments or computational
devices, but physical and computational aspects of the cosmos have
very precise and clear structures which exhaust our native ability to
process with mind-numbing repetition and detail.

>
> > The link between the
> > sensorimotive and electromagnetic is the invariance between the two.
>
> ?
Feelings and action potentials have some phenomenological overlap.
That's the link. They both map to the same changes at the same place
and time, they just face opposite directions. Electromagnetism is
public front end, sensorimotive is private back end, which for us can
focus its attention toward the front, back, or the link in between.

>
> >> You attribute to me a metaphysical assumption, where I assume only
> >> what is taught in high school to everyone, + the idea that at some
> >> level matter (not primitive matter, but the matter we can observe  
> >> when
> >> we look at our bodies) obeys deterministic laws, where you make three
> >> metaphysical assumptions: matter, mind and a link which refer to
> >> notion that you don't succeed to define (like sensorimotive).
>
> >> Then you derive from this that the third person "I" is not Turing
> >> emulable, but this appears to be non justified too, even if we are
> >> willing to accept some meaning in those nanosensorimotive actions
> >> (which I am not, for I don't have a clue about what they can be).
>
> > The "I" is always first person.
>
> I don't think so. When I say that my child is hungry, I refer to a 1-I  
> in the third person way. That's empathy.

You still don't call your child 'I'. You're right that sensorimotive
1-p is sharable, as long as you are sufficiently isomorphic to the other
entity.

> And there is also a 3-I, which is the body, or its local description  
> handled by the "doctor". They correspond in the theory to an abstract  
> notion of Gödel number. It is our "code" (at the right level) in the  
> comp frame.
> In fact there are as many notions of I as there are intensional  
> variants of self-reference. They all have a role in the shaping of  
> reality.

The subjective is a continuum from most subjective - imagination,
interior monologue, etc to the ego, the body, clothes, possessions,
language, home, memory, friends, work, interests, etc to the
objective; partnerships, causes, philosophies, career, community,
species, planet, etc. Yes, I agree they all have a role to play in the
shaping of reality.

>
> > The brain or body would be third
> > person. What do you think of Super-Turing computation?
>
> Which one?
> Most are Turing emulated by the UD, and correspond to Turing's notion  
> of Oracle computable machine. It is an open problem if such form of TM  
> can exist physically, both in usual physics and in the comp physics.
> Of course there might be notions of super-Turing machine being not  
> digitally emulable (even with oracle). You can use them to illustrate  
> your non-comp theory. That would make your theory far clearer indeed.

I was curious about Hava Siegelmann's theories about analog
computation.
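
For anyone else curious: the extra power in Siegelmann's analog
recurrent networks comes from real-valued weights of unbounded
precision, which can hold unboundedly many bits. The toy below is my
own illustration of that single idea (exact encode/decode of a bit
string via the doubling map), not her actual construction:

```python
# Sketch (not Siegelmann's construction): one "analog" value in [0, 1)
# whose binary expansion acts as unbounded memory. Iterating the
# doubling map reads the bits back out, illustrating why unbounded-
# precision reals exceed finite-state storage.
from fractions import Fraction

def encode(bits):
    # pack a bit string into one exact rational number in [0, 1)
    r = Fraction(0)
    for i, b in enumerate(bits, start=1):
        r += Fraction(b, 2**i)
    return r

def decode(r, n):
    # iterated doubling map: each step shifts out the leading bit
    out = []
    for _ in range(n):
        r *= 2
        bit = int(r >= 1)
        out.append(bit)
        r -= bit
    return out

bits = [1, 0, 1, 1, 0, 0, 1]
assert decode(encode(bits), len(bits)) == bits
```

A physical realization would need noiseless, infinite-precision real
quantities, which is exactly why the super-Turing claim is contested.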

Craig

Stathis Papaioannou

Oct 1, 2011, 6:14:35 AM10/1/11
to everyth...@googlegroups.com

I'm afraid the analogies you use don't help, at least for me. Does an
ion channel ever open in the absence of an observable cause? It's a
simple yes/no question. Whether consciousness is associated,
supervenient, linked, provided by God or whatever is a separate
question.


--
Stathis Papaioannou

Bruno Marchal

Oct 1, 2011, 10:13:23 AM10/1/11
to everyth...@googlegroups.com


I have no clue what you are talking about.
That your conclusion makes some arithmetical beings look like
impersonal zombies is just racism, to me.
So I see a sort of racism against machines or numbers, justified by
unintelligible sentences.

>
>>
>>
>>
>>>> This is the kind of strong metaphysical and aristotleian assumption
>>>> which I am not sure to see the need for, beyond extrapolating from
>>>> our
>>>> direct experience.
>>
>>> Is it better to extrapolate only from indirect experience?
>>
>> It is better to derive from clear assumptions.
>
> Clear assumptions can be the most misleading kind.

But that is the goal. Clear assumptions lead to clear mistakes,
which can then be corrected with respect to facts, or repeatable
experiments.
Unclear assumptions lead to arbitrariness, racism, etc.

>
>>
>>
>>
>>>> You have to assume mind, and a form of panpsychism, which seems
>>>> to me
>>>> as much problematic than what it is supposed to explain or at least
>>>> describe.
>>
>>> It wouldn't be panpsychism exactly, any more than neurochemistry is
>>> panbrainism. The idea is that whatever sensorimotive experience
>>> taking
>>> place at these microcosmic levels
>>
>> But now you have to define this, and explain where the microcosmos
>> illusion comes from, or your theory is circular.
>
> I don't think there is a microcosmos illusion, unless you are talking
> about the current assumptions of the Standard Model as particles.
> That's not an illusion though, just a specialized interpretation that
> doesn't scale up to the macrocosm. As far as where sensorimotive
> phenomena comes from, it precedes causality. 'Comes from' is a
> sensorimotive proposition and not the other way around. The
> singularity functions inherently as supremacy of orientation, and
> sense and motive are energetic functions of the difference between it
> and its existential annihilation through time and space.

That does not help.


>
>>
>>> is nothing like what we, as a
>>> conscious collaboration of trillions of these things, can relate to.
>>> It's more like protopsychism.
>>
>> ... and where that protopsychism come from, and what is it.
>> Could you clearly separate your assumptions, and your reasoning (if
>> there is any). I just try to understand.
>
> Specifically, like if you have any two atoms, something must have a
> sense of what is supposed to happen when they get close to each other.
> Iron atoms have a particular way of relating that's different from
> carbon atoms, and that relation can be quantified. That doesn't mean
> that the relation is nothing but a quantitative skeleton. There is an
> actual experience going on - an attraction, a repulsion, momentum,
> acceleration...various states of holding, releasing, or binding a
> 'charge'. What looks like a charge to us under a microscope is in fact
> a proto-feeling with an associated range of proto-motivations.

Why?


>
>>
>>
>>
>>>> The link between both remains as unexplainable as before.
>>
>>> Mind would be a sensorimotive structure.
>>
>> A physical structure? A mathematical structure? A theological
>> structure?
>
> No, a sensorimotive structure - which could encompass mathematical,
> theological, or physical styles. It's an experience that plays out
> over time and has participatory aspects. Some parts of the structure
> are quite literal and map to muscle movements and discrete neural
> pathways, and other ranges are lower frequency, broader, deeper, more
> continuous and poetic non-structure. It's a much wider band than that
> which is observable through physical instruments or computational
> devices, but physical and computational aspects of the cosmos have
> very precise and clear structures which exhaust our native ability to
> process with mind-numbing repetition and detail.

?
(I let you know that one of my main motivations consists in explaining
the physical, that is explaining it without using physical notions and
assumptions. The same for consciousness).

>
>>
>>> The link between the
>>> sensorimotive and electromagnetic is the invariance between the two.
>>
>> ?
> Feelings and action potentials have some phenomenological overlap.

What is feeling, what is action, what is potential?

> That's the link. They both map to the same changes at the same place
> and time, they just face opposite directions. Electromagnetism is
> public front end, sensorimotive is private back end, which for us can
> focus its attention toward the front, back, or the link in between.

?

>
>>
>>>> You attribute to me a metaphysical assumption, where I assume only
>>>> what is taught in high school to everyone, + the idea that at some
>>>> level matter (not primitive matter, but the matter we can observe
>>>> when
>>>> we look at our bodies) obeys deterministic laws, where you make
>>>> three
>>>> metaphysical assumptions: matter, mind and a link which refer to
>>>> notion that you don't succeed to define (like sensorimotive).
>>
>>>> Then you derive from this that the third person "I" is not Turing
>>>> emulable, but this appears to be non justified too, even if we are
>>>> willing to accept some meaning in those nanosensorimotive actions
>>>> (which I am not, for I don't have a clue about what they can be).
>>
>>> The "I" is always first person.
>>
>> I don't think so. When I say that my child is hungry, I refer to a
>> 1-I
>> in the third person way. That's empathy.
>
> You still don't call your child 'I'. You're right that sensorimotive
> 1-p is sharable, as long as you are sufficiently isomorphic to the other
> entity.

That makes sense, at least by replacing "sensorimotive" by "subjective".


>
>> And there is also a 3-I, which is the body, or its local description
>> handled by the "doctor". They correspond in the theory to an abstract
>> notion of Gödel number. It is our "code" (at the right level) in the
>> comp frame.
>> In fact there are as many notions of I as there are intensional
>> variants of self-reference. They all have a role in the shaping of
>> reality.
>
> The subjective is a continuum from most subjective - imagination,
> interior monologue, etc to the ego, the body, clothes, possessions,
> language, home, memory, friends, work, interests, etc to the
> objective; partnerships, causes, philosophies, career, community,
> species, planet, etc. Yes, I agree they all have a role to play in the
> shaping of reality.

OK.


>
>>
>>> The brain or body would be third
>>> person. What do you think of Super-Turing computation?
>>
>> Which one?
>> Most are Turing emulated by the UD, and correspond to Turing's notion
>> of Oracle computable machine. It is an open problem if such form of
>> TM
>> can exist physically, both in usual physics and in the comp physics.
>> Of course there might be notions of super-Turing machine being not
>> digitally emulable (even with oracle). You can use them to illustrate
>> your non-comp theory. That would make your theory far clearer indeed.
>
> I was curious about Hava Siegelmann's theories about analog
> computation.

Those are material phenomena, and they can be used to perform some
computations, but with digital mechanism, they can be recovered in the
physical reality. They can't be primitive.

Bruno


http://iridia.ulb.ac.be/~marchal/

Jason Resch

Oct 1, 2011, 11:01:22 AM10/1/11
to everyth...@googlegroups.com
On Fri, Sep 30, 2011 at 7:44 PM, Craig Weinberg <whats...@gmail.com> wrote:
On Sep 30, 10:16 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Sep 30, 2011, at 7:22 AM, Craig Weinberg <whatsons...@gmail.com>  
> wrote:
>
> > On Sep 29, 11:14 pm, Jason Resch <jasonre...@gmail.com> wrote:
> >> Craig, do the neurons violate the conservation of energy and
> >> momentum?  And if not, then how can they have any unexpected effects?
>
> > No. If you are wondering whether I think that anything that
> > contradicts established observations of physics, chemistry, or biology
> > is going on, the answer has always been no, and the fact that you are
> > still asking means that you don't understand what I've said.
>
> If it seems that I have misunderstood it is because I see a  
> contradiction.  If a neuron opens its ion channels because of a  
> thought, then thought is something we can see all the correlates of in  
> terms of third person observable particle collisions.  If the ion  
> channel were to open without the observable and necessary particle  
> collisions then the neuron would be violating the conservation of  
> momentum.

It's not the particle collisions that cause an ion channel to open,
it's the neuron's sensitivity to specific electrochemical conditions
associated with neurotransmitter molecules,

The neuron's sensitivities can be ignored if one looks at the neuron as a collection of particles, and you see interactions between particles rather than between neurons. If you think this is not possible, then you are assuming neurons can do things that would violate the conservation of momentum.
 
and its ability to
respond with a specific physical change. All of those changes are
accompanied by qualitative experiences on that microcosmic level. Our
thoughts do not cause the ion channels to directly open or close any
more than a screen writer causes the pixels of your TV to get brighter
or dimmer, you are talking about two entirely different scales of
perception. Think of our thoughts and feelings as the 'back end' of
the total physical 'front end' activity of the brain.

I would be more inclined to say they are the "top end" rather than the "back end", as thoughts are built on top of awareness of information, which is built on top of brain behaviors and states, which is built on top of neuron behaviors, which is built on top of chemistry, which is built on top of the particle interactions of physics.  When you describe it as a "back end" it casts a mystical, improbable and thus unscientific light on the idea, since that explanation ends with "there is no explanation".  Worse, either this invisible back end is tinkering with the trajectories of particles (as in interactionist dualism) or it is just there, having no effect (as an epiphenomenon), and leads to zombies and questions of its purpose.  Alternately, you could adopt Leibniz's approach and say the front end and back end are independent realities which are, using your term, synergized.  But Leibniz's harmony leads to pure idealism, for the existence of minds is enough to explain all observations; there would be no need for a physical world to "force" our observations to agree with physical law.
 
The back end
thoughts and feelings cannot be reduced to the front end activities of
neurons or ion channels, but they can be reduced to the back end
experiences of those neurons or ion channels - almost, except that
they synergize in a more significant way than front end phenomena can.

Think of it like a fractal visualization if you want, where the large design is
always emerging from small designs, but imagine that the large design
and the small designs are both controlled by separate, but overlapping
intelligences so that sometimes the small forms change and propagate
to the larger picture and other times the largest picture changes and
all of the smaller images are consequently changed. Now imagine that
the entire fractal dynamic has an invisible, private backstage to it,

Either this invisible and irreducible backstage can alter the direction or energy of particles (thus leading to observable physical differences and effects) or it cannot, making it an unnecessary epiphenomenon.  Which would you say it is?
 
which has no fractal shapes developing and shifting every second, but
it has instead flavors and sounds that change at completely different
intervals of time than the front end fractal, so that the pulsating
rhythms of the fractal are represented on the back end as long
melodies and fragrant journeys.

Both the visual fractal and the olfactory musical follow some of the
same cues exactly and both of them diverge from each other completely
as well so that you cannot look at the fractal and find some graphic
mechanism that produces a song, and the existence of the song does not
mean that there is an invisible musicality pushing the pixels of the
fractal around, it's just that they are like the two ends of a bowtie;
one matter across space and the other experience through time. They
influence each other - sometimes intentionally, sometimes arbitrarily,
and sometimes in a conflicting or self defeating way.

Craig

Craig Weinberg

Oct 1, 2011, 2:35:17 PM10/1/11
to Everything List
On Oct 1, 6:14 am, Stathis Papaioannou <stath...@gmail.com> wrote:
Observable by whom? It seems like a simple yes or no question to you
because you aren't willing or able to see the whole phenomena. If I
choose to think about something that makes me mad, I observe that I
feel angry, and I observe that neurons fire, ion channels open, etc at
the same time. The thoughts and anger they arouse are the observable
cause, but they cannot be observed with a microscope or fMRI. They are
observed by the person whose brain it is. This is the literal reality
of what is going on. If I put my hand on a hot stove, neurons fire,
ion channels open, and I feel burning pain through my skin. The cause
there is the heat of the stove.

Craig

Craig Weinberg

Oct 1, 2011, 3:05:39 PM10/1/11
to Everything List
I don't think that there are any arithmetical beings. It's a fantasy,
or really more of a presumption mistaking a narrow category of
understanding for a cosmic primitive.

> So I see a sort of racism against machines or numbers, justified by  
> unintelligible sentences.

I know that's what you see. I think that it is the shadow of your own
overconfidence in the theoretical-mechanistic perspective that you
project onto me.

>
>
>
> >>>> This is the kind of strong metaphysical and aristotleian assumption
> >>>> which I am not sure to see the need for, beyond extrapolating from
> >>>> our
> >>>> direct experience.
>
> >>> Is it better to extrapolate only from indirect experience?
>
> >> It is better to derive from clear assumptions.
>
> > Clear assumptions can be the most misleading kind.
>
> But that is the goal. Clear assumptions lead to clear mistakes,  
> which can then be corrected with respect to facts, or repeatable  
> experiments.
> Unclear assumptions lead to arbitrariness, racism, etc.

To me the goal is to reveal the truth, regardless of the nature of the
assumptions which are required to get there. If you a priori prejudice
the cosmos against figurative, multivalent phenomenology then you just
confirm your own bias.

>
> >>>> You have to assume mind, and a form of panpsychism, which seems  
> >>>> to me
> >>>> as much problematic than what it is supposed to explain or at least
> >>>> describe.
>
> >>> It wouldn't be panpsychism exactly, any more than neurochemistry is
> >>> panbrainism. The idea is that whatever sensorimotive experience  
> >>> taking
> >>> place at these microcosmic levels
>
> >> But now you have to define this, and explain where the microcosmos
> >> illusion comes from, or your theory is circular.
>
> > I don't think there is a microcosmos illusion, unless you are talking
> > about the current assumptions of the Standard Model as particles.
> > That's not an illusion though, just a specialized interpretation that
> > doesn't scale up to the macrocosm. As far as where sensorimotive
> > phenomena comes from, it precedes causality. 'Comes from' is a
> > sensorimotive proposition and not the other way around. The
> > singularity functions inherently as supremacy of orientation, and
> > sense and motive are energetic functions of the difference between it
> > and its existential annihilation through time and space.
>
> That does not help.
>

That doesn't help me either.

>
> >>> is nothing like what we, as a
> >>> conscious collaboration of trillions of these things, can relate to.
> >>> It's more like protopsychism.
>
> >> ... and where that protopsychism come from, and what is it.
> >> Could you clearly separate your assumptions, and your reasoning (if
> >> there is any). I just try to understand.
>
> > Specifically, like if you have any two atoms, something must have a
> > sense of what is supposed to happen when they get close to each other.
> > Iron atoms have a particular way of relating that's different from
> > carbon atoms, and that relation can be quantified. That doesn't mean
> > that the relation is nothing but a quantitative skeleton. There is an
> > actual experience going on - an attraction, a repulsion, momentum,
> > acceleration...various states of holding, releasing, or binding a
> > 'charge'. What looks like a charge to us under a microscope is in fact
> > a proto-feeling with an associated range of proto-motivations.
>
> Why?
>

Because that's what we are made of.

>
> >>>> The link between both remains as unexplainable as before.
>
> >>> Mind would be a sensorimotive structure.
>
> >> A physical structure? A mathematical structure? A theological  
> >> structure?
>
> > No, a sensorimotive structure - which could encompass mathematical,
> > theological, or physical styles. It's an experience that plays out
> > over time and has participatory aspects. Some parts of the structure
> > are quite literal and map to muscle movements and discrete neural
> > pathways, and other ranges are lower frequency, broader, deeper, more
> > continuous and poetic non-structure. It's a much wider band than that
> > which is observable through physical instruments or computational
> > devices, but physical and computational aspects of the cosmos have
> > very precise and clear structures which exhaust our native ability to
> > process with mind-numbing repetition and detail.
>
> ?
> (I let you know that one of my main motivations consists in explaining  
> the physical, that is explaining it without using physical notions and  
> assumptions. The same for consciousness).

But what you are explaining it with is no more explainable than
physical notions or assumptions. Why explain what is real in terms
which are not real?

>
>
>
> >>> The link between the
> >>> sensorimotive and electromagnetic is the invariance between the two.
>
> >> ?
> > Feelings and action potentials have some phenomenological overlap.
>
> What is feeling, what is action, what is potential?

To ask what feeling is can only be sophistry. It is a primitive of
human subjectivity, and possibly universal subjectivity. To experience
directly, qualitatively, significantly. An action potential is an
electromagnetic spike train among neurons. They can be correlated to
instantiation of feelings.
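
For concreteness, here is what the 'quantitative skeleton' of a spike
train looks like in the standard leaky integrate-and-fire idealization.
This is a generic textbook toy with illustrative (not measured)
parameter values, not a model of any specific claim in this thread:

```python
# A minimal leaky integrate-and-fire sketch of the third-person side of
# an action potential: the membrane potential integrates input current,
# leaks toward rest, and registers a spike on crossing threshold.
def simulate_lif(current, v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0,
                 tau=10.0, r=10.0, dt=1.0):
    v = v_rest
    spikes = []
    for t, i_in in enumerate(current):
        # leaky integration: dV/dt = (-(V - V_rest) + R*I) / tau
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset            # hyperpolarized reset
    return spikes

# constant suprathreshold drive produces a regular spike train
print(simulate_lif([2.0] * 100))
```

Everything in this description is public and quantitative; the model is
silent on whether the crossing feels like anything.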

>
> > That's the link. They both map to the same changes at the same place
> > and time, they just face opposite directions. Electromagnetism is
> > public front end, sensorimotive is private back end, which for us can
> > focus its attention toward the front, back, or the link in between.
>
> ?

Electromagnetic and sensorimotive phenomena are opposite sides of the
same thing. I don't know how I could make it more clear.
Electromagnetism is public, generic, a-signifying, and sensorimotive
experience is private, proprietary and signifying.

>
> >>>> You attribute to me a metaphysical assumption, where I assume only
> >>>> what is taught in high school to everyone, + the idea that at some
> >>>> level matter (not primitive matter, but the matter we can observe
> >>>> when
> >>>> we look at our bodies) obeys deterministic laws, where you make  
> >>>> three
> >>>> metaphysical assumptions: matter, mind and a link which refer to
> >>>> notion that you don't succeed to define (like sensorimotive).
>
> >>>> Then you derive from this that the third person "I" is not Turing
> >>>> emulable, but this appears to be non justified too, even if we are
> >>>> willing to accept some meaning in those nanosensorimotive actions
> >>>> (which I am not, for I don't have a clue about what they can be).
>
> >>> The "I" is always first person.
>
> >> I don't think so. When I say that my child is hungry, I refer to a  
> >> 1-I
> >> in the third person way. That's empathy.
>
> > You still don't call your child 'I'. You're right that sensorimotive
> > 1-p is sharable, as long as you are sufficiently isomorphic to the other
> > entity.
>
> That makes sense, at least by replacing "sensorimotive" by "subjective".

Subjective is necessary but not sufficient to describe sensorimotive.
Sensorimotive is specific to actual sensory input and motive output.
You feel cold, so you choose to maybe put on a coat or turn the heater
on or just ignore it. Out of the many perceptions which make up your
awareness, the feeling of being cold has risen to the level of
conscious attention, and out of the many responses, impulses and
actions, we are motivated to choose one particular strategy to employ
first - even if it's a passive strategy of doing nothing. This push
and pull, receiving and sending of niche-specific, circumstantial
sensemaking is the essence of subjective content as opposed to a
categorization of the functional role of 'subjectivity'.
What if material is primitive?

Craig

Craig Weinberg

Oct 1, 2011, 3:58:31 PM10/1/11
to Everything List
On Oct 1, 11:01 am, Jason Resch <jasonre...@gmail.com> wrote:
If you rule out the sensitivity of the neuron, then you rule out the
neuron. It's like taking a baby and putting in a blender and saying
'see, there's no baby there anymore'. Don't you see that reducing
everything to particles is a meaningless exercise that makes
everything meaningless? You cannot on the one hand deny all levels of
organization above that of the atom (or are atoms too qualitative? is
it all just quarks, bosons, and leptons?) and then invoke
'collections' of particles. Collections to whom?

>
> > and it's ability to
> > respond with a specific physical change. All of those changes are
> > accompanied by qualitative experiences on that microcosmic level. Our
> > thoughts do not cause the ion channels to directly open or close any
> > more than a screen writer causes the pixels of your TV to get brighter
> > or dimmer, you are talking about two entirely different scales of
> > perception. Think of our thoughts and feelings as the 'back end' of
> > the total physical 'front end' activity of the brain.
>
> I would be more inclined to say they are the "top end" rather than the "back
> end",

I understand, but I'm saying that isn't correct. I think that thoughts
are not the inevitable result of a physical mechanism, that's not
possible. Experiences can only arise from more primitive experiences,
not from an inanimate object.

> as thoughts are built on top of awareness of information,

Information is a metaphysical concept. It's not real. That's where you
are jumping from something which is meaningful but insubstantial
(thoughts, awareness) to something substantial but not meaningful.
Information is just giving a name to ignorance. It is a phantom having
neither meaning nor substance; it's like phlogiston or soul - a
presumptive objectification of subjective experience. My hypothesis
presents an antidote to this error.

>which is
> built on top of brain behaviors and states, which is built on top of neuron
> behaviors, which is built on top of chemistry, which is built on top of the
> particle interactions of physics.  

I understand completely. That's what I used to think too. It doesn't
make sense though. It's like saying that a song is built on top of a
stereo system behaviors and states that is built on top of CD player
behaviors, which is built on top of laser technology and polymer
chemistry, which is built on top of photon interactions and electronic
computation.

>When you describe it as a "back end" it
> casts a mystical, improbable and thus unscientific light on the idea, since
> that explanation ends with "there is no explanation".  

Just because it makes you uncomfortable doesn't mean it's not
accurate. My point is that front end - back end reflects the parity
and parallelism of the system. It is not a cascading bottom up
causality, it is a parallel coordination as well as a bi-directional
mutual causality.

>Worse, either this
> invisible back end is tinkering with the trajectories of particles (as in
> interactionist dualism) or it is just there, having no effect (as an
> epiphenomenon)

No, you just don't understand it because you are holding on to the
fantasy of a voyeuristic universe. I'm talking about the voice in your
head right now that you are listening to read this sentence. This
voice is reflected as electromagnetic modulation patterns in the brain
- changing voltages, action potentials, ion channels opening and
closing.. but those things are all utterly meaningless were it not for
their top level coherence as a voice in someone's mind. The voice is
indeed invisible, and it does indeed orchestrate brain activity from
within, but the trajectories of particles are determined by their own
causes and conditions - they do what they need to do, and they do what
the brain as a whole needs them to do. They are all part of the same
system.

> and leads to zombies and questions of its purpose.
> Alternately, you could adopt Leibniz's approach and say the front end and
> back end are independent realities which are, using your term, synergized.

They are partially independent but they are overlapping as well.

> But Leibniz's harmony leads to pure idealism, for the existence of minds is
> enough to explain all observations; there would be no need for a physical
> word to "force" our observations to agree with physical law.

My hypothesis does not lead to pure idealism at all. It is the
synthesis between the ideal and the concrete (opposite of ideal)
through sense. Subjects are influenced by physics from the bottom up,
physics is influenced by subjectivity (at least in the case of
voluntary control of our minds and bodies) from the top down. They do
not influence each other directly though - we do not command our
fingers to type by petitioning them intellectually to do so, we
actively induce them to move through our realized motivations. We
cannot directly bridge the lofty heights of our cognitive awareness
with the gross materiality of our body, we have to step it down to the
common neurological language of gesture - tension, relaxation, etc.

>
> > The back end
> > thoughts and feelings cannot be reduced to the front end activities of
> > neurons or ion channels, but they can be reduced to the back end
> > experiences of those neurons or ion channels - almost, except that
> > they synergize in a more significant way than front end phenomena can.
>
> > Think of it like a fractal vis if you want, where the large design is
> > always emerging from small designs, but imagine that the large design
> > and the small designs are both controlled by separate, but overlapping
> > intelligences so that sometimes the small forms change and propagate
> > to the larger picture and other times the largest picture changes and
> > all of the smaller images are consequently changed. Now imagine that
> > the entire fractal dynamic has an invisible, private backstage to it,
>
> Either this invisible and irreducible backstage can alter the direction or
> energy of particles (thus leading to observable physical differences and
> effects) or it cannot, making it an unnecessary epiphenomenon.  Which would
> you say it is?

The direction and energy of particles *is* the invisible and
irreducible backstage, just seen from the public front. All events and
energies are experiences of material substances. It's only confusing
because we are such an enormous compilation of energies and materials
and we are overwhelmed with the front end appearances of them as
molecules, cells, tissues, etc. We can't see the private, experiential
side of all of those microcosmic structures so we are confounded by
the necessity of jumping from the fact of our own monumentally
elaborated sensorimotive end product of subjective evolution to the
fact of an equally monumental but misleading universe of necessary but
not sufficient material phenomena.

Particles change their charge, polarize and depolarize, bind and
dissolve bonds as responses to local conditions which they alone are
directly sensitive to. Those changes are reflected in and reflections
of the global changes which are instantiated at the top level. Our
backstage process changes their backstage process and vice versa, but
you can't translate the backstage changes directly into the front end
appearances without knowing both ends of the translation. That's
because they are ontological opposites - material objects in space on
the front end and perceptual subjects through time on the back end.

Craig

John Mikes

Oct 1, 2011, 4:44:19 PM10/1/11
to everyth...@googlegroups.com
Dear Craig,
I went through most of your (unmarked) remarks and my mouse forced me (against my better judgement) to add some of my own.
I will insert in blue - bold Italics.
John Mikes

*
We have a notion (human) to explain some aspects of phenomena received by epistemic enrichment over the millennia; I call it conventional sciences, and it formulates "PARTICLES" (by aid of mathematics). We devise a 'model' of topics for our thinking populated by such figments and build upon it a technology that is 'almost' good (some mishaps...). So I agree with your outcry.
*
>
> > and its ability to
> > respond with a specific physical change. All of those changes are
> > accompanied by qualitative experiences on that microcosmic level. Our
> > thoughts do not cause the ion channels to directly open or close any
> > more than a screen writer causes the pixels of your TV to get brighter
> > or dimmer, you are talking about two entirely different scales of
> > perception. Think of our thoughts and feelings as the 'back end' of
> > the total physical 'front end' activity of the brain.
>
> I would be more inclined to say they are the "top end" rather than the "back
> end",

I understand, but I'm saying that isn't correct. I think that thoughts
are not the inevitable result of a physical mechanism, that's not
possible. Experiences can only arise from more primitive experiences,
not from an inanimate object.
*
Amen. We know so little about our mentality... (if it is YOUR word: 
what would you identify with 'awareness' and 'information'? the latter
comes pretty well later on). 
*

 as thoughts are built on top of awareness of information,

Information is a metaphysical concept. It's not real. That's where you
are jumping from something which is meaningful but insubstantial
(thoughts, awareness) to something substantial but not meaningful.
Information is just giving a name to ignorance. It is a phantom having
neither meaning nor substance; it's like phlogiston or soul - a
presumptive objectification of subjective experience. My hypothesis
presents an antidote to this error.

>which is
> built on top of brain behaviors and states, which is built on top of neuron
> behaviors, which is built on top of chemistry, which is built on top of the
> particle interactions of physics.  

I understand completely. That's what I used to think too. It doesn't
make sense though. It's like saying that a song is built on top of a
stereo system behaviors and states that is built on top of CD player
behaviors, which is built on top of laser technology and polymer
chemistry, which is built on top of photon interactions and electronic
computation.
*
Amen.
*
>When you describe it as a "back end" it
> casts a mystical, improbable and thus unscientific light on the idea, since
> that explanation ends with "there is no explanation".  

Just because it makes you uncomfortable doesn't mean it's not
accurate. My point is that front end - back end reflects the parity
and parallelism of the system. It is not a cascading bottom up
causality, it is a parallel coordination as well as a bi-directional
mutual causality.
*
Add to it: the rest of the so far unknown everything that may also
initiate the observed little we know of. Causality as we speak about
it is a selection from the already known - rejecting the rest of the
world and the rest of the aspects - those we have not yet perceived.
*

>Worse, either this
> invisible back end is tinkering with the trajectories of particles (as in
> interactionist dualism) or it is just there, having no effect (as an
> epiphenomenon)

No, you just don't understand it because you are holding on to the
fantasy of a voyeuristic universe. I'm talking about the voice in your
head right now that you are listening to read this sentence. This
voice is reflected as electromagnetic modulation patterns in the brain
- changing voltages, action potentials, ion channels opening and
closing.. but those things are all utterly meaningless were it not for
their top level coherence as a voice in someone's mind. The voice is
indeed invisible, and it does indeed orchestrate brain activity from
within, but the trajectories of particles are determined by their own
causes and conditions - they do what they need to do, and they do what
the brain as a whole needs them to do. They are all part of the same
system.
*
Nice description, IMO missing the essence: the 'mentality' of which
this entire system (neuronal brain functions) is a TOOL for. The one
that thinks, remembers, is aware and experiences. 'Us' (self?)
*
*
The last 6-line sentence should be carved in gold. I would leave out
'microcosmic'  and think twice about  the 'material' figment, leading 
to a 'physical world' statistically re-evaluated every time when new
'information' has been received. Statistically means the arbitrary choice
of the boundaries, making the decisions a lie.  
*

 Particles change their charge, polarize and depolarize, bind and
dissolve bonds as responses to local conditions which they alone are
directly sensitive to. Those changes are reflected in and reflections
of the global changes which are instantiated at the top level. Our
backstage process changes their backstage process and vice versa, but
you can't translate the backstage changes directly into the front end
appearances without knowing both ends of the translation. That's
because they are ontological opposites - material objects in space on
the front end and perceptual subjects through time on the back end.
*
I can appreciate your struggle with the conventional words.

Craig
 
John M, (science) agnostic

Craig Weinberg

Oct 1, 2011, 6:14:57 PM10/1/11
to Everything List
On Oct 1, 4:44 pm, John Mikes <jami...@gmail.com> wrote:
> Dear Craig,
> I went through most of your (unmarked) remarks and my mouse forced me
> (against my better judgement) to add some of my own.
> I will insert in blue - bold Italics.
> John Mikes

Thanks John. I appreciate your comments, although I will have to take
your word for their blueness and italicized format. I'm just read
reading this through the webpage because it seems like gmail messes up
the the formatting when I do replies. Do you have a good way of
reading/writing to this list in rich text?

> Just because it makes you uncomfortable doesn't mean it's not
> accurate. My point is that front end - back end reflects the parity
> and parallelism of the system. It is not a cascading bottom up
> causality, it is a parallel coordination as well as a bi-directional
> mutual causality.
> *
> *Add to it: the rest of the so far unknown everything that may also*
> *initiate the observed little we know of. Causality as we speak about *
> *it is a selection from the already known - rejecting the rest of the *
> *world and the rest of the aspects - those we have not yet perceived.*
> ***

Yes, it amazes me, given the history of science being built of
revolutionary ideas on the ruins of shattered paradigms that we are
still so quick to assume that 'this time, we must have it right'. It
is just too unthinkable that anything so fundamental as the relation
between subjectivity and material could be questioned. Even knowing as
we do about the hundreds of identifiable forms of cognitive bias which
our reasoning is subject to, we are still blind to their influence in
our own epistemological framework. I got into trouble here for calling
our reverse engineering approach to understanding consciousness, life,
and the cosmos 'forensic', but I think that it's an appropriate term.
By handling our inquiry into sentience like a criminal investigation,
we limit our evidence to what is no longer alive.

>
> >Worse, either this
> > invisible back end is tinkering with the trajectories of particles (as in
> > interactionist dualism) or it is just there, having no effect (as an
> > epiphenomenon)
>
> No, you just don't understand it because you are holding on to the
> fantasy of a voyeuristic universe. I'm talking about the voice in your
> head right now that you are listening to read this sentence. This
> voice is reflected as electromagnetic modulation patterns in the brain
> - changing voltages, action potentials, ion channels opening and
> closing.. but those things are all utterly meaningless were it not for
> their top level coherence as a voice in someone's mind. The voice is
> indeed invisible, and it does indeed orchestrate brain activity from
> within, but the trajectories of particles are determined by their own
> causes and conditions - they do what they need to do, and they do what
> the brain as a whole needs them to do. They are all part of the same
> system.
> *
> *Nice description, IMO missing the essence: the 'mentality' of which*
> *this entire system (neuronal brain functions) is a TOOL for. The one*
> *that thinks, remembers, is aware and experiences. 'Us' (self?)*
> ***
>

Yes, that seems to be an inconvenient state of affairs to explain.
What is the purpose of the brain if not to be of use to the person who
inhabits the body? Why would the brain need to invent 'us' to do what
it's already doing for its own neurological and biological
evolutionary purposes?
> *The last 6-line sentence should be carved in gold. I would leave out*
> *'microcosmic'  and think twice about  the 'material' figment, leading *
> *to a 'physical world' statistically re-evaluated every time when new *
> *'information' has been received. Statistically means the arbitrary choice*
> *of the boundaries, making the decisions a lie.  *
> ***

I think I get what you are saying. I was talking specifically about
microcosmic phenomena here because Jason is intent on pursuing an
epistemology of devout bottom up determinism, but yes, of course it's
true on all levels of the cosmos that we cannot see the invisible
forest for the statistical trees, mistaking, in our contemporary
Occidentosis, the existence of probability for the impossibility of
teleological insistence. When we subscribe to this worldview of
inevitability, we are deciding that we cannot decide anything,
choosing to believe that we cannot choose to believe. It is insanity
disguised as reason.

>  Particles change their charge, polarize and depolarize, bind and
> dissolve bonds as responses to local conditions which they alone are
> directly sensitive to. Those changes are reflected in and reflections
> of the global changes which are instantiated at the top level. Our
> backstage process changes their backstage process and vice versa, but
> you can't translate the backstage changes directly into the front end
> appearances without knowing both ends of the translation. That's
> because they are ontological opposites - material objects in space on
> the front end and perceptual subjects through time on the back end.
> *
> *I can appreciate your struggle with the conventional words. *
>

Thanks. Don't new ideas always bring with them new words?

Craig

Stathis Papaioannou

Oct 1, 2011, 8:52:41 PM10/1/11
to everyth...@googlegroups.com
On Sun, Oct 2, 2011 at 5:35 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> I'm afraid the analogies you use don't help, at least for me. Does an
>> ion channel ever open in the absence of an observable cause? It's a
>> simple yes/no question. Whether consciousness is associated,
>> supervenient, linked, provided by God or whatever is a separate
>> question.
>
> Observable by who?

Observable by a third party.

> It seems like a simple yes or no question to you
> because you aren't willing or able to see the whole phenomena. If I
> choose to think about something that makes me mad, I observe that I
> feel angry, and I observe that neurons fire, ion channels open, etc at
> the same time. The thoughts and anger they arouse are the observable
> cause, but they cannot be observed with a microscope or fMRI. They are
> observed by the person whose brain it is. This is the literal reality
> of what is going on. If I put my hand on a hot stove, neurons fire,
> ion channels open, and I feel burning pain through my skin. The cause
> there is the heat of the stove.


--
Stathis Papaioannou

Bruno Marchal

Oct 2, 2011, 5:01:08 AM10/2/11
to everyth...@googlegroups.com
On 01 Oct 2011, at 21:05, Craig Weinberg wrote:

On Oct 1, 10:13 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 01 Oct 2011, at 03:39, Craig Weinberg wrote:


The singularity is all the matter that there is, was, and will be, but
it has no exterior - no cracks made of space or time, it's all
interiority. It's feelings, images, experiences, expectations, dreams,
etc, and whatever countless other forms might exist in the cosmos. You
can use arithmetic to render an impersonation of feeling, as you can
write a song that feels arithmetic - but not all songs feel
arithmetic. You can write a poem about a color or you can write an
equation about visible electromagnetism, but neither completely
describe either color or electromagnetism.

I have no clue what you are talking about.
That your conclusion makes some arithmetical beings look like
impersonal zombies is just racism for me.

I don't think that there are any arithmetical beings.

In which theory?



It's a fantasy,
or really more of a presumption mistaking a narrow category of
understanding with a cosmic primitive.

You miss the incompleteness discoveries. To believe that arithmetic is narrow just tells me something about you, not about arithmetic. It means that you have a pregodelian conception of arithmetic. We know today that arithmetic is beyond any conceivable effective axiomatization.




So I see a sort of racism against machine or numbers, justified by  
unintelligible sentences.

I know that's what you see. I think that it is the shadow of your own
overconfidence in the theoretical-mechanistic perspective that you
project onto me.

You are the one developing a philosophy making humans with prosthetic brains less human, if not zombies.







This is the kind of strong metaphysical and aristotleian assumption
which I am not sure to see the need for, beyond extrapolating from
our
direct experience.

Is it better to extrapolate only from indirect experience?

It is better to derive from clear assumptions.

Clear assumptions can be the most misleading kind.

But that is the goal. Clear assumptions lead to clear misleading,
which can then be corrected with respect to facts, or repeatable  
experiments.
Unclear assumptions lead to arbitrariness, racism, etc.

To me the goal is to reveal the truth,

That is a personal goal. I don't think that truth can be revealed, only questioned.



regardless of the nature of the
assumptions which are required to get there. If you a priori prejudice
the cosmos against figurative, multivalent phenomenology then you just
confirm your own bias.

I don't hide this, and it is part of the scientific (modest) method. I assume comp, and I derive consequences in that frame. Everyone is free to use this for or against some world view.





I don't think there is a microcosmos illusion, unless you are talking
about the current assumptions of the Standard Model as particles.
That's not an illusion though, just a specialized interpretation that
doesn't scale up to the macrocosm. As far as where sensorimotive
phenomena comes from, it precedes causality. 'Comes from' is a
sensorimotive proposition and not the other way around. The
singularity functions inherently as supremacy of orientation, and
sense and motive are energetic functions of the difference between it
and its existential annihilation through time and space.

That does not help.


That doesn't help me either.

I mean: I don't understand. Too many precise terms in a field where we question the meaning of even simpler terms.



Specifically, like if you have any two atoms, something must have a
sense of what is supposed to happen when they get close to each other.
Iron atoms have a particular way of relating that's different from
carbon atoms, and that relation can be quantified. That doesn't mean
that the relation is nothing but a quantitative skeleton. There is an
actual experience going on - an attraction, a repulsion, momentum,
acceleration...various states of holding, releasing, or binding a
'charge'. What looks like a charge to us under a microscope is in fact
a proto-feeling with an associated range of proto-motivations.

Why?


Because that's what we are made of.

Why should I take your words for granted?


?
(I let you know that one of my main motivation consists in explaining  
the physical, that is explaining it without using physical notions and  
assumptions. The same for consciousness).

But what you are explaining it with is no more explainable than
physical notions or assumptions. Why explain what is real in terms
which are not real?

You are just begging the question. You talk as if you knew what is real or not.
Now it is the fact that all scientists agree with simple facts like 1+9=10, etc. Actually they are using such facts already in their theories. I just show that IF we are machine, THEN those elementary facts are enough to explain the less elementary ones.







The link between the
sensorimotive and electromagnetic is the invariance between the two.

?
Feelings and action potentials have some phenomenological overlap.

What is feeling, what is action, what is potential?

To ask what feeling is can only be sophistry.

Not when addressing issues in fundamental cognitive science. Neither matter nor consciousness should be taken as simple elementary notions.


It is a primitive of
human subjectivity, and possibly universal subjectivity. To experience
directly, qualitatively, significantly. An action potential is an
electromagnetic spike train among neurons. They can be correlated to
instantiation of feelings.

I agree with all this, but that has to be explained, not taken for granted.




That's the link. They both map to the same changes at the same place
and time, they just face opposite directions. Electromagnetism is
public front end, sensorimotive is private back end, which for us can
focus it's attention toward the front, back, or the link in between.

?

Electromagnetic and sensorimotive phenomena are opposite sides of the
same thing. I don't know how I could make it more clear.

That is your main problem. 



Electromagnetism is public, generic, a-signifying, and sensorimotive
experience is private, proprietary and signifying.

That is like saying, in the machine language, that electromagnetism is of type Bp, and sensori-motive is of type Bp & p, but I think that electromagnetism is of type Bp & Dt, and sensorimotive is of type Bp & Dt & p.
A part of your intuition might be accessible to computers, making your dismissal of the possibility of comp even more premature.
Hmm... The difference between subjective and sensorimotive would be captured by the difference between Bp & p, and Bp & Dt & p. That confirms my feeling described above.
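[Editorial note: for readers unfamiliar with the Bp/Dt shorthand, it comes from Gödel-Löb provability logic, where B is the machine's provability (belief) operator, D is its dual (Dp abbreviates ¬B¬p, i.e. consistency of p), and t is any fixed true proposition, so Dt reads "I am consistent". The glosses below are a rough sketch of the intended readings, reconstructed from how the notation is used in this thread, not a definitive statement of Bruno's theory:]

```latex
% B  = provability ("the machine proves/believes")
% Dp = \neg B \neg p (p is consistent with what the machine proves)
% t  = a fixed true proposition, so Dt reads "I am consistent"
\begin{align*}
Bp                     &: \text{belief (provable)}\\
Bp \land p             &: \text{knowledge (Theaetetus: true belief)}\\
Bp \land Dt            &: \text{observation (provable and consistent)}\\
Bp \land Dt \land p    &: \text{sensation (provable, consistent, and true)}
\end{align*}
```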



I was curious about Hava Siegelmann's theories about analog
computation.

That's material phenomenon, and they can be used to perform some  
computations, but with digital mechanism, they can be recovered in the  
physical reality. They can't be primitive.

What if material is primitive?

Then comp is false. And you have to make this clear by assuming the relevant infinities. We would also be led to the peculiar situation where machines could correctly prove that they are not machines, making all possible discourses of machines being of the type Bf. You might eventually change my mind on the non-provability of comp (as opposed to the non-recognizability of our level of comp). For this you should convince the machine that material is necessarily primitive. I begin to doubt that non-comp can make any sense. Hmm...

Bruno


Craig Weinberg

Oct 2, 2011, 7:58:38 AM10/2/11
to Everything List
On Oct 1, 8:52 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Sun, Oct 2, 2011 at 5:35 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> I'm afraid the analogies you use don't help, at least for me. Does an
> >> ion channel ever open in the absence of an observable cause? It's a
> >> simple yes/no question. Whether consciousness is associated,
> >> supervenient, linked, provided by God or whatever is a separate
> >> question.
>
> > Observable by who?
>
> Observable by a third party.

Yes. If you only look at the ion channels of a neuron in someone's
amygdala without knowing what you are looking at or what the subject
is thinking about, then you are not going to know that the voltage is
changing because they are thinking of hitting a straight flush on the
river after going 'all in'. If you trace it back further, you will see
earlier voltage changes or depolarizations in cognitive areas, and
then in the audio sensory areas of the brain if the suggestion came as
a verbal instruction from a researcher. None of those changes in
different regions of the brain mean anything though except in
reference to the back end semantic understanding.

Craig

Craig Weinberg

Oct 2, 2011, 9:07:33 AM10/2/11
to Everything List
On Oct 2, 5:01 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 01 Oct 2011, at 21:05, Craig Weinberg wrote:
>
> > On Oct 1, 10:13 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> >> On 01 Oct 2011, at 03:39, Craig Weinberg wrote:
>
> >>> The singularity is all the matter that there is, was, and will be,
> >>> but
> >>> it has no exterior - no cracks made of space or time, it's all
> >>> interiority. It's feelings, images, experiences, expectations,
> >>> dreams,
> >>> etc, and whatever countless other forms might exist in the cosmos.
> >>> You
> >>> can use arithmetic to render an impersonation of feeling, as you can
> >>> write a song that feels arithmetic - but not all songs feel
> >>> arithmetic. You can write a poem about a color or you can write an
> >>> equation about visible electromagnetism, but neither completely
> >>> describe either color or electromagnetism.
>
> >> I have no clue what you are taking about.
> >> That your conclusion makes some arithmetical being looking like
> >> impersonal zombie is just racism for me.
>
> > I don't think that there are any arithmetical beings.
>
> In which theory?

In reality.

>
> > It's a fantasy,
> > or really more of a presumption mistaking a narrow category of
> > understanding with a cosmic primitive.
>
> You miss the incompleteness discoveries. To believe that arithmetic is
> narrow just tells me something about you, not about arithmetic. It
> means that you have a pregodelian conception of arithmetic. We know
> today that arithmetic is beyond any conceivable effective
> axiomatizations.

I don't disagree with arithmetic being exactly what you say it is,
only that it cannot be realized except through sensorimotive
experience. Without that actualization - being computed neurologically
or digitally in semiconductors, analogously in beer bottles, etc. -
there is only the idea of the existence of arithmetic, which is also a
sensorimotive experience, or nothing at all. There is no arithmetic
'out there', it's only inside of matter.

So yes, arithmetic extends to the inconceivable and nonaxiomatizable
but the sensorimotive gestalts underlying arithmetic are much more
inconceivable and nonaxiomatizable. A greater infinity.

>
>
>
> >> So I see a sort of racism against machine or numbers, justified by
> >> unintelligible sentences.
>
> > I know that's what you see. I think that it is the shadow of your own
> > overconfidence in the theoretical-mechanistic perspective that you
> > project onto me.
>
> You are the one developing a philosophy making human with prosthetic
> brain less human, if not zombie.

I'm not against a prosthetic brain, I just think that it's going to
have to be made of some kind of cells that live and die, which may
mean that it has to be organic, which may mean that it has to be based
on nucleic acids. Your theory would conclude that we should see
naturally evolved brains made out of a variety of materials not based
on living cells if we look long enough. I don't think that is
necessarily the case.

>
>
>
> >>>>>> This is the kind of strong metaphysical and aristotleian
> >>>>>> assumption
> >>>>>> which I am not sure to see the need for, beyond extrapolating
> >>>>>> from
> >>>>>> our
> >>>>>> direct experience.
>
> >>>>> Is it better to extrapolate only from indirect experience?
>
> >>>> It is better to derive from clear assumptions.
>
> >>> Clear assumptions can be the most misleading kind.
>
> >> But that is the goal. Clear assumptions lead to clear misleading,
> >> which can then be corrected with respect to facts, or repeatable
> >> experiments.
> >> Unclear assumptions lead to arbitrariness, racism, etc.
>
> > To me the goal is to reveal the truth,
>
> That is a personal goal. I don't think that truth can be revealed,
> only questioned.

How can you question it if it is not revealed?

>
> > regardless of the nature of the
> > assumptions which are required to get there. If you a priori prejudice
> > the cosmos against figurative, multivalent phenomenology then you just
> > confirm your own bias.
>
> I don't hide this, and it is part of the scientific (modest) method. I
> assume comp, and I derive consequences in that frame. Everyone is free
> to use this for or against some world view.
>

It's a good method for so many things, but not everything, and I'm
only interested in solving everything.

>
>
> >>> I don't think there is a microcosmos illusion, unless you are
> >>> talking
> >>> about the current assumptions of the Standard Model as particles.
> >>> That's not an illusion though, just a specialized interpretation
> >>> that
> >>> doesn't scale up to the macrocosm. As far as where sensorimotive
> >>> phenomena comes from, it precedes causality. 'Comes from' is a
> >>> sensorimotive proposition and not the other way around. The
> >>> singularity functions inherently as supremacy of orientation, and
> >>> sense and motive are energetic functions of the difference between
> >>> it
> >>> and it's existential annihilation through time and space.
>
> >> That does not help.
>
> > That doesn't help me either.
>
> I mean: I don't understand. Too many precise terms in a field where we
> question the meaning of even simpler terms.

I have precise terms because I have a precise understanding of what I
mean. I'm saying that causality is an epiphenomenon of a feeling of
succession, which is a specific category of the sensorimotive palette,
like pain or blue. All of these feelings and experiences are generated
by the underlying dynamic of the singularity chasing its tail through
the relatively fictional expansion of timespace.

>
>
>
> >>> Specifically, like if you have any two atoms, something must have a
> >>> sense of what is supposed to happen when they get close to each
> >>> other.
> >>> Iron atoms have a particular way of relating that's different from
> >>> carbon atoms, and that relation can be quantified. That doesn't mean
> >>> that the relation is nothing but a quantitative skeleton. There is
> >>> an
> >>> actual experience going on - an attraction, a repulsion, momentum,
> >>> acceleration...various states of holding, releasing, or binding a
> >>> 'charge'. What looks like a charge to us under a microscope is in
> >>> fact
> >>> a proto-feeling with an associated range of proto-motivations.
>
> >> Why?
>
> > Because that's what we are made of.
>
> Why should I take your words for granted.

You don't have to. You should check it out for yourself and see if it
makes sense, and if not, why not?

>
>
>
> >> ?
> >> (I let you know that one of my main motivation consists in explaining
> >> the physical, that is explaining it without using physical notions
> >> and
> >> assumptions. The same for consciousness).
>
> > But what you are explaining it with is no more explainable than
> > physical notions or assumptions. Why explain what is real in terms
> > which are not real?
>
> You are just begging the question. You talk like if you knew what is
> real or not.

I know that consciousness is real, and my consciousness through my
body tells me that matter is real. My consciousness also tells me that
some of its own contents do not matter and its perceptions do not
faithfully render what is real outside of my awareness. I would say
that arithmetic truths matter but they are not real, and therefore
cannot be manifested in a vacuum - only through some material object
which can accommodate the corresponding sensorimotive experiences. You
can't write a program that runs on a computer made of only liquid or
vapor - you need solid structures to accommodate fixed arithmetic
truths. You need the right kinds of matter to express arithmetic
truths, but matter does not need arithmetic to experience its own
being.

> Now it is the fact that all scientists agree with simple facts like
> 1+9=10, etc. Actually they are using such facts already in their
> theories. I just show that IF we are machine, THEN those elementary
> facts are enough to explain the less elementary one.

But since we aren't only a machine, then it's a dead end. It's
circular reasoning because you can say we can't prove we're not
machines, but the whole idea of 'proving' is mechanical so you are
just magnifying the implicit prejudice and getting further from the
non-mechanistic truths of awareness.

>
>
>
> >>>>> The link between the
> >>>>> sensorimotive and electromagnetic is the invariance between the
> >>>>> two.
>
> >>>> ?
> >>> Feelings and action potentials have some phenomenological overlap.
>
> >> What is feeling, what is action, what is potential?
>
> > To ask what feeling is can only be sophistry.
>
> Not when addressing issues in fundamental cognitive science. Neither
> matter nor consciousness should be taken as simple elementary notions.

But numbers should be taken as elementary notions? That's the problem,
you are trying to explain awareness as an epiphenomenon of cognitive
science, when of course cognition arises from feeling (otherwise
babies would come out of the womb solving math equations instead of
crying, and civilizations would evolve binary codes before
ideographic alphabets and cave paintings).

>
> > It is a primitive of
> > human subjectivity, and possibly universal subjectivity. To experience
> > directly, qualitatively, significantly. An action potential is an
> > electromagnetic spike train among neurons. They can be correlated to
> > instantiation of feelings.
>
> I agree with all this, but that has to be explained, not as taken for
> granted.

How can any primitive be explained? If explanation is to reduce to
simpler known phenomena, and primitive is to be the simplest knowable
phenomena, then it's a contradiction to explain it any further. We can
only place it into a meaningful context, which I think my hypothesis
does.

>
>
>
> >>> That's the link. They both map to the same changes at the same place
> >>> and time, they just face opposite directions. Electromagnetism is
> >>> public front end, sensorimotive is private back end, which for us
> >>> can
> >>> focus it's attention toward the front, back, or the link in between.
>
> >> ?
>
> > Electromagnetic and sensorimotive phenomena are opposite sides of the
> > same thing. I don't know how I could make it more clear.
>
> That is your main problem.

Ok, but what isn't clear? Opposite? 'same thing'? Electromagnetism?
Sensorimotive?

Electromagnetism is the name we give to the various phenomena of
matter across space - waving, attracting, repulsing, moving,
intensifying, discharging, radiating, accumulating density, surfaces,
depth, consistency, etc. Sensorimotivation is the name I'm giving to
the various phenomena of experience (energy) through time - detecting,
sensing, feeling, being, doing, intention, image, emotion, thought,
meaning, symbol, archetype, metaphor, semiotics, communication,
arithmetic, etc.

>
> > Electromagnetism is public, generic, a-signifying, and sensorimotive
> > experience is private, proprietary and signifying.
>
> That is like saying, in the machine language, that electromagnetism is
> of type Bp, and sensori-motive is of type Bp & p, but I think that
> electromagnetism is of type Bp & Dt, and sensorimotive is of type of
> Bp & Dt & p.
> A part of your intuition might be accessible to computer, making your
> dismissing the possibility of comp even more premature.

What's Dt?

I think I know what Bp and p are but maybe define them longhand so I
can be sure.
I'll get back to you if you can explain the variables better. I tried
Googling them but nothing clear comes up for me.

>
> >>> I was curious about Hava Siegelmann's theories about analog
> >>> computation.
>
> >> That's material phenomenon, and they can be used to perform some
> >> computations, but with digital mechanism, they can be recovered in
> >> the
> >> physical reality. They can't be primitive.
>
> > What if material is primitive?
>
> Then comp is false. And you have to make this clear by assuming the
> relevant infinities.

What has to be infinite in order for comp to be false, and isn't comp
already assuming that arithmetic is non-axiomatizable and therefore
infinite?

> We would also be led to the peculiar situation
> where machine could correctly prove that they are not machine,

I don't see how matter as a primitive makes machines able to prove
that they are not machines. I think a machine (or something we
presume is a machine) proves whether or not it is a machine by how it
responds to errors or hardware failures. You could maybe say that what
we are made of is an accumulation of the universe's favorite errors,
failures, and aberrations.

> making
> all possible discourses of machine being of the type Bf. You might
> eventually change my mind on the non provability of comp (as opposed
> to the non recognizability of the our level of comp). For this you
> should convince the machine that material is necessarily primitive. I
> begin to doubt that non-comp can make any sense. Hmm...

If I pull the plug on the machine, then the machine halts. Why should
that be the case were the machine independent of its material substrate?

Craig

Stathis Papaioannou

unread,
Oct 2, 2011, 9:28:35 AM10/2/11
to everyth...@googlegroups.com

So you do believe that ion channels will open without an observable
cause, since thoughts are not an observable cause. A neuroscientist
would see neurons firing apparently for no reason, violating physical
laws.


--
Stathis Papaioannou

Craig Weinberg

Oct 2, 2011, 1:14:00 PM10/2/11
to Everything List
On Oct 2, 9:28 am, Stathis Papaioannou <stath...@gmail.com> wrote:

>
> So you do believe that ion channels will open without an observable
> cause, since thoughts are not an observable cause. A neuroscientist
> would see neurons firing apparently for no reason, violating physical
> laws.

Thoughts are observable to the thinker. No physical laws are violated.
When a person thinks of gambling, the associated neurons fire for that
reason. The firings have a proximate cause - changes in voltage or
polarity, etc, but those phenomena also are activated because the
person who they are part of thinks of gambling. Both the thought and
the mechanism are part of the same thing, a thing which has its only
existence as the dualistic relation between the two.

If you stimulate the amygdala in a gambler directly with
electromagnetically charged instruments, then they will likely be
reminded of the feeling of gambling. If you stimulate the area in
someone who has never gambled, they would be reminded instead of the
feeling of jumping across a creek or lying to their teacher or
something. It is bi-directional. I don't understand why that would be
such a difficult concept to consider. I can put a magnet onto the
screen of a CRT and cause it to change colors, or rub my eyes with my
hands to see colors and patterns but that doesn't mean that they have
to be manually manipulated that way to function.

Craig

meekerdb

Oct 2, 2011, 7:00:19 PM10/2/11
to everyth...@googlegroups.com
On 10/2/2011 10:14 AM, Craig Weinberg wrote:
> On Oct 2, 9:28 am, Stathis Papaioannou<stath...@gmail.com> wrote:
>
>> So you do believe that ion channels will open without an observable
>> cause, since thoughts are not an observable cause. A neuroscientist
>> would see neurons firing apparently for no reason, violating physical
>> laws.
> Thoughts are observable to the thinker. No physical laws are violated.
> When a person thinks of gambling, the associated neurons fire for that
> reason. The firings have a proximate cause - changes in voltage or
> polarity, etc, but those phenomena also are activated because the
> person who they are part of thinks of gambling. Both the thought and
> the mechanism are part of the same thing, a thing which has its only
> existence as the dualistic relation between the two.

If they are part of the same thing, then it is presumptuous to say one causes the other.
One might as well say the neurons firing caused the thought of gambling - and in fact that
is what Stathis is saying and for the very good reason that a little electrical
stimulation, that has no "thought" or "sensorimotive" correlate, can cause both neurons
firing AND their correlated thoughts. But thoughts cannot cause the electrical stimulator
to fire. So it is *not* bidirectional.

Brent

Craig Weinberg

Oct 2, 2011, 7:08:10 PM10/2/11
to Everything List
On Oct 2, 7:00 pm, meekerdb <meeke...@verizon.net> wrote:
> On 10/2/2011 10:14 AM, Craig Weinberg wrote:
>
> > On Oct 2, 9:28 am, Stathis Papaioannou<stath...@gmail.com>  wrote:
>
> >> So you do believe that ion channels will open without an observable
> >> cause, since thoughts are not an observable cause. A neuroscientist
> >> would see neurons firing apparently for no reason, violating physical
> >> laws.
> > Thoughts are observable to the thinker. No physical laws are violated.
> > When a person thinks of gambling, the associated neurons fire for that
> > reason. The firings have a proximate cause - changes in voltage or
> > polarity, etc, but those phenomena also are activated because the
> > person who they are part of thinks of gambling. Both the thought and
> > the mechanism are part of the same thing, a thing which has its only
> > existence as the dualistic relation between the two.
>
> If they are part of the same thing, then it is presumptuous to say one causes the other.  
> One might as well say the neurons firing caused the thought of gambling - and in fact that
> is what Stathis is saying and for the very good reason that a little electrical
> stimulation, that has no "thought" or "sensorimotive" correlate, can cause both neurons
> firing AND their correlated thoughts.  But thoughts cannot cause the electrical stimulator
> to fire.  So it is *not* bidirectional.
>

What do you mean? Thoughts *do* cause an electrical detector to fire.
That's what an MRI shows. You could use any kind of electrical probe
or sensor instead as long as it is sufficiently sensitive to detect
the ordinary firing of a neuron. That's how it's possible to have
thought-driven computers.
http://www.pcworld.com/article/129889/scientists_show_thoughtcontrolled_computer_at_cebit.html

Craig

Stathis Papaioannou

Oct 3, 2011, 8:29:14 AM10/3/11
to everyth...@googlegroups.com

The device cited picks up electrical impulses from the scalp. The
electrical activity comes from the neurons firing in the brain. These
neurons may have associated thoughts when they fire but this is not
obvious to an external observer: all that is obvious is that a
particular neuron fires because of various measurable factors such as
its resting membrane potential and the neurotransmitter released by
other neurons with which it interfaces. So to an external observer,
every neural event has an observable cause, generally other neural
events. This means the externally observable behaviour of the brain is
computable, even though the external observer may not know that the
brain is conscious. On the other hand, if the external observer does
not know about neurotransmitters and receptors he will not be able to
explain why the neurons fire - it will look to him as if they fire for
no reason. The mental is supervenient on the physical, but the mental
cannot as a separate entity move the physical. If it could, we would
observe neurons breaking physical laws.


--
Stathis Papaioannou

Bruno Marchal

Oct 3, 2011, 11:16:00 AM10/3/11
to everyth...@googlegroups.com
On 02 Oct 2011, at 15:07, Craig Weinberg wrote:

On Oct 2, 5:01 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 01 Oct 2011, at 21:05, Craig Weinberg wrote:

On Oct 1, 10:13 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
On 01 Oct 2011, at 03:39, Craig Weinberg wrote:

The singularity is all the matter that there is, was, and will be,
but
it has no exterior - no cracks made of space or time, it's all
interiority. It's feelings, images, experiences, expectations,
dreams,
etc, and whatever countless other forms might exist in the cosmos.
You
can use arithmetic to render an impersonation of feeling, as you can
write a song that feels arithmetic - but not all songs feel
arithmetic. You can write a poem about a color or you can write an
equation about visible electromagnetism, but neither completely
describe either color or electromagnetism.

I have no clue what you are taking about.
That your conclusion makes some arithmetical being looking like
impersonal zombie is just racism for me.

I don't think that there are any arithmetical beings.

In which theory?

In reality.

That type of assertion is equivalent with "because God says so".
Reality is what we try to figure out.
If you know for sure what reality is, then I can do nothing, except perhaps invite you to cultivate more the modest doubting attitude.





It's a fantasy,
or really more of a presumption mistaking a narrow category of
understanding with a cosmic primitive.

You miss the incompleteness discoveries. To believe that arithmetic is
narrow just tells me something about you, not about arithmetic. It
means that you have a pregodelian conception of arithmetic. We know
today that arithmetic is beyond any conceivable effective
axiomatizations.

I don't disagree with arithmetic being exactly what you say it is,
only that it cannot be realized except through sensorimotive
experience. Without that actualization - to be computed neurologically
or digitally in semiconductors, analogously in beer bottles, etc, then
there is only the idea of the existence of arithmetic, which also is a
sensorimotive experience or nothing at all. There is no arithmetic
'out there', it's only inside of matter.

This makes sense with the non-comp theory (which you have not yet presented to us).
In the comp theory, arithmetic is independent of anything, and matter is only a perception inside arithmetic.




So yes, arithmetic extends to the inconceivable and nonaxiomatizable
but the sensorimotive gestalts underlying arithmetic are much more
inconceivable and nonaxiomatizable. A greater infinity.

Inside arithmetic *is* a bigger infinity than arithmetic. It is not even nameable.







So I see a sort of racism against machine or numbers, justified by
unintelligible sentences.

I know that's what you see. I think that it is the shadow of your own
overconfidence in the theoretical-mechanistic perspective that you
project onto me.

You are the one developing a philosophy making human with prosthetic
brain less human, if not zombie.

I'm not against a prosthetic brain, I just think that it's going to
have to be made of some kind of cells that live and die, which may
mean that it has to be organic, which may mean that it has to be based
on nucleic acids.

Replace in the quote just above "prosthetic brain" by "silicon prosthetic brain".



Your theory would conclude that we should see
naturally evolved brains made out of a variety of materials not based
on living cells if we look long enough. I don't think that is
necessarily the case.

The theory says that it is *possibly* the case, and the advent of computers shows it to be the case right now. The difference between artificial and natural is ... artificial.







This is the kind of strong metaphysical and aristotleian
assumption
which I am not sure to see the need for, beyond extrapolating
from
our
direct experience.

Is it better to extrapolate only from indirect experience?

It is better to derive from clear assumptions.

Clear assumptions can be the most misleading kind.

But that is the goal. Clear assumptions lead to clear misleading,
which can then be corrected with respect to facts, or repeatable
experiments.
Unclear assumptions lead to arbitrariness, racism, etc.

To me the goal is to reveal the truth,

That is a personal goal. I don't think that truth can be revealed,
only questioned.

How can you question it if it is not revealed?

It can be suggested, like in dreams.





regardless of the nature of the
assumptions which are required to get there. If you a priori prejudice
the cosmos against figurative, multivalent phenomenology then you just
confirm your own bias.

I don't hide this, and it is part of the scientific (modest) method. I
assume comp, and I derive consequences in that frame. Everyone is free
to use this for or against some world view.


It's a good method for so many things, but not everything, and I'm
only interested in solving everything.

You might end up with a theory of everything that you will not be able to communicate. You might have fans and disciples (and even money) but not students and researchers correcting and extending your work.






I don't think there is a microcosmos illusion, unless you are
talking
about the current assumptions of the Standard Model as particles.
That's not an illusion though, just a specialized interpretation
that
doesn't scale up to the macrocosm. As far as where sensorimotive
phenomena comes from, it precedes causality. 'Comes from' is a
sensorimotive proposition and not the other way around. The
singularity functions inherently as supremacy of orientation, and
sense and motive are energetic functions of the difference between
it
and it's existential annihilation through time and space.

That does not help.

That doesn't help me either.

I mean: I don't understand. Too many precise terms in a field where we
question the meaning of even simpler terms.

I have precise terms because I have a precise understanding of what I
mean.

To be frank, I don't think you do have them. I don't take for granted most 'familiar' words of natural language, especially ones based on a physicalist conception of reality.



I'm saying that causality is an epiphenomenon of a feeling of
succession, which is a specific category of the sensorimotive palette,
like pain or blue.

I can understand this ... by interpreting this in the comp theory, making your terms precise (indeed they become numbers, or number relations, or higher-order number relations).
That is why sometimes I can appreciate your intuition: you talk like a universal (Löbian) ... machine. But then you are using what you say as a critique of mechanism, where the universal machine appears as a simple counterexample.



All of these feelings and experiences are generated
by the underlying dynamic of the singularity chasing its tail through
the relatively fictional expansion of timespace.

I have no clue what you mean by time, space, relatively fictional, dynamic, generated, experiences, feelings ... in your "theory".






Specifically, like if you have any two atoms, something must have a
sense of what is supposed to happen when they get close to each
other.
Iron atoms have a particular way of relating that's different from
carbon atoms, and that relation can be quantified. That doesn't mean
that the relation is nothing but a quantitative skeleton. There is
an
actual experience going on - an attraction, a repulsion, momentum,
acceleration...various states of holding, releasing, or binding a
'charge'. What looks like a charge to us under a microscope is in
fact
a proto-feeling with an associated range of proto-motivations.

Why?

Because that's what we are made of.

Why should I take your words for granted.

You don't have to. You should check it out for yourself and see if it
makes sense, and if not, why not?

My attraction to comp, is that it explains to me why the concept of primary matter does not make sense. In fact the more general notion of "being made of" does not make sense to me (even if it makes sense for some universal machine). 







?
(I let you know that one of my main motivation consists in explaining
the physical, that is explaining it without using physical notions
and
assumptions. The same for consciousness).

But what you are explaining it with is no more explainable than
physical notions or assumptions. Why explain what is real in terms
which are not real?

You are just begging the question. You talk like if you knew what is
real or not.

I know that consciousness is real,

Good. My oldest opponents were disagreeing on this point (a criticism which does not make much sense).



and my consciousness through my
body tells me that matter is real.

Matter is real. I do agree with this. But matter, assuming comp, is not something made of elementary material things. Matter, to be short, is the border of the universal mind, as seen by the universal mind. It is a real perception of something which is not primarily material, but sums up infinities of computations. An instructive image is the border of the Mandelbrot set.



My consciousness also tells me that
some of its own contents do not matter and its perceptions do not
faithfully render what is real outside of my awareness. I would say
that arithmetic truths matter but they are not real, and therefore
cannot be manifested in a vacuum - only through some material object
which can accommodate the corresponding sensorimotive experiences. You
can't write a program that runs on a computer made of only liquid or
vapor - you need solid structures to accommodate fixed arithmetic
truths. You need the right kinds of matter to express arithmetic
truths, but matter does not need arithmetic to experience its own
being.

Not necessarily. You have to give an argument, and there are many results which can explain to you how such an argument has to be very sophisticated. Apparently, in arithmetic, numbers do dream coherently (in a first-person sharable way) of a stable quantum reality, with some symmetries at the bottom, and wavy-like interferences.




Now it is the fact that all scientists agree with simple facts like
1+9=10, etc. Actually they are using such facts already in their
theories. I just show that IF we are machine, THEN those elementary
facts are enough to explain the less elementary one.

But since we aren't only a machine, then it's a dead end. 

You should say:  "but since in my theory I am assuming that we are not machine, it is a dead end in my theory".



It's
circular reasoning because you can say we can't prove we're not
machines,

I say the exact opposite. We can prove that we are not machine (in case we are not machine). If we are (consistent) machine, then we cannot prove it. 


but the whole idea of 'proving' is mechanical so you are
just magnifying the implicit prejudice and getting further from the
non-mechanistic truths of awareness.

The human activity of proving is not mechanical(*), but a gentle polite proof should be mechanically checkable. You can't tell the peer reviewers that for proposition 13 they have to pray to God or smoke salvia divinorum. (Or you say it only at the coffee break, and this is for private concerns, not for the publication, unless it is a paper on salvia or God, but then the goal is no more to prove but to suggest a possible empirical discovery.)

(*) assuming P ≠ NP. 





The link between the
sensorimotive and electromagnetic is the invariance between the
two.

?
Feelings and action potentials have some phenomenological overlap.

What is feeling, what is action, what is potential?

To ask what feeling is can only be sophistry.

Not when addressing issues in fundamental cognitive science. Neither
matter nor consciousness should be taken as simple elementary notions.

But numbers should be taken as elementary notions?

In the usual mathematical sense. No need of extra metaphysical assumption. You just need to believe sentences like "prime numbers exist".
All the material science use this. Despite the claims of some philosophers, we just cannot do science without assuming the independence of the truth of elementary (first order) arithmetical relations. 



That's the problem,
you are trying to explain awareness as an epiphenomenon

Awareness is not an epiphenomenon at all. It is a real, non-illusory epistemological phenomenon which is responsible (in some logico-arithmetical sense) for the rise of physical reality.

It is: NUMBERS ==> CONSCIOUSNESS/DREAMS ==> SHARABLE DREAMS (physical realities).




of cognitive
science, when of course cognition arises from feeling (otherwise
babies would come out of the womb solving math equations instead of
crying, and civilizations would evolve binary codes before
ideographic alphabets and cave paintings).

I agree that cognition arises from feelings.





It is a primitive of
human subjectivity, and possibly universal subjectivity. To experience
directly, qualitatively, significantly. An action potential is an
electromagnetic spike train among neurons. They can be correlated to
instantiation of feelings.

I agree with all this, but that has to be explained, not as taken for
granted.

How can any primitive be explained?

It can't, by definition. That is why I don't take matter and consciousness as primitive, given that we can explain them from numbers (and their laws). The contrary is false. We cannot explain numbers by matter or consciousness. It can be proved that numbers cannot be explained at all. In that sense, they are provably necessarily primitive.



If explanation is to reduce to
simpler known phenomena, and primitive is to be the simplest knowable
phenomena, then it's a contradiction to explain it any further. We can
only place it into a meaningful context, which I think my hypothesis
does.




That's the link. They both map to the same changes at the same place
and time, they just face opposite directions. Electromagnetism is
public front end, sensorimotive is private back end, which for us
can
focus it's attention toward the front, back, or the link in between.

?

Electromagnetic and sensorimotive phenomena are opposite sides of the
same thing. I don't know how I could make it more clear.

That is your main problem.

Ok, but what isn't clear? Opposite? 'same thing'? Electromagnetism?
Sensorimotive?

Yes, all that.



Electromagnetism is the name we give to the various phenomena of
matter across space - waving, attracting, repulsing, moving,
intensifying, discharging, radiating, accumulating density, surfaces,
depth, consistency, etc. Sensorimotivation is the name I'm giving to
the various phenomena of experience (energy) through time - detecting,
sensing, feeling, being, doing, intention, image, emotion, thought,
meaning, symbol, archetype, metaphor, semiotics, communication,
arithmetic, etc.

That's what the numbers can explain, and that is what cannot explain the numbers (without assuming them implicitly).





Electromagnetism is public, generic, a-signifying, and sensorimotive
experience is private, proprietary and signifying.

That is like saying, in the machine language, that electromagnetism is
of type Bp, and sensori-motive is of type Bp & p, but I think that
electromagnetism is of type Bp & Dt, and sensorimotive is of type of
Bp & Dt & p.
A part of your intuition might be accessible to computer, making your
dismissing the possibility of comp even more premature.

What's Dt?

I think I know what Bp and p are but maybe define them longhand so I
can be sure.

I fix some machine M.
p is for an (arbitrary) arithmetical proposition.
Bp is for "M proves p", written in the language used by the machine (it can be Peano arithmetic, in which case Bp is the beweisbar('p') of Gödel, where 'p' is the Gödel number of p, that is, a coding of a description of p in the language of the machine).
The modality "B" obeys the modal logics G (for what the machine can prove about its provability) and G* (for what is true about that provability, but that the machine cannot necessarily prove). G is a sublogic of G*.

Dp is for ~B~p (= it is not provable by M that not p), where ~ = not. If Dp is true, the machine cannot prove ~p, and this means that p is consistent for the machine. For example (with t = the constant true, e.g. "1+1=2", and f = the constant false, e.g. "1+1=3"):

Dt = ~B~t = ~Bf = "M is consistent" (written in the language of the machine).

Gödel's second incompleteness theorem is 

Dt -> ~BDt    (If M is consistent then M cannot prove that M is consistent).   Note that M can prove this.

Note that ~Bp is equivalent with D~p
Note that ~Dp is equivalent with B~p
~D~p is equivalent with Bp.

B and D are dual. In modal logic B is called the box, and D the diamond. People also use [] and <> for them.

There are infinitely many different modal logics.
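The Box/Diamond dualities above can be checked mechanically. Here is a minimal sketch (my addition, not part of the thread): B is interpreted as the Box over a small finite, transitive, irreflexive Kripke frame, the kind of frame that characterises the provability logic G, and D is defined as ~B~. The particular worlds and accessibility relation are invented for illustration.

```python
from itertools import product

# A tiny transitive, irreflexive Kripke frame (illustrative values).
WORLDS = [0, 1, 2]
R = {(0, 1), (0, 2), (1, 2)}  # accessibility: 0 sees 1 and 2; 1 sees 2

def box(ext):
    """Worlds where Bp holds: p is true at every accessible world."""
    return {w for w in WORLDS if all(v in ext for (u, v) in R if u == w)}

def neg(ext):
    """Worlds where ~p holds."""
    return set(WORLDS) - set(ext)

def diamond(ext):
    """Dp defined, as in the post, as ~B~p."""
    return neg(box(neg(ext)))

# Check every duality listed above, for all 2^3 possible extensions of p.
for bits in product([False, True], repeat=len(WORLDS)):
    p = {w for w, b in zip(WORLDS, bits) if b}
    assert neg(box(p)) == diamond(neg(p))   # ~Bp  is equivalent with D~p
    assert neg(diamond(p)) == box(neg(p))   # ~Dp  is equivalent with B~p
    assert neg(diamond(neg(p))) == box(p)   # ~D~p is equivalent with Bp
print("all dualities hold on this frame")
```

On finite transitive irreflexive frames like this one, Löb's axiom B(Bp -> p) -> Bp is also valid, which is what makes them models of G.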




Hmm... The difference between subjective and sensorimotive would be
captured by the difference between Bp & p, and Bp & Dt & p. That
confirms my feeling described above.


I'll get back to you if you can explain the variables better. I tried
Googling them but nothing clear comes up for me.

I hope that what I wrote above helps a bit. There are good books on the subject, but you need to follow some course in mathematical logic to get familiar with it.





I was curious about Hava Siegelmann's theories about analog
computation.

Those are material phenomena, and they can be used to perform some
computations, but with digital mechanism they can be recovered in the
physical reality. They can't be primitive.

What if material is primitive?

Then comp is false. And you have to make this clear by assuming the
relevant infinities.

What has to be infinite in order for comp to be false, and isn't comp
already assuming that arithmetic is non-axiomatizable and therefore
infinite?

The fact that arithmetic (in the sense of arithmetical truth, or the set of all arithmetical true sentences) is not axiomatizable is a theorem in pure mathematics. It is independent of comp. But that fact can be used to explain why comp is not a reductionism.

For comp to be false you need an argument for saying "NO" to the digital doctor. He can propose you different artificial brains:
The cheapest one: the neurons are artificial devices counting spikes and triggering in response to being triggered, according to some theory of neurons and brains.
More expensive: it simulates by artificial devices each molecule of your brain.
Much more expensive: it simulates by artificial devices the quantum states of each elementary particle/wave playing a role in your brain.
Even more expensive: it simulates by artificial devices the quantum states of each elementary particle/wave in your brain, keeping the entanglement of those particles/waves with your neighborhood.
etc.

To say no to ALL the doctor's propositions, you are implicitly telling him that none of those finite descriptions will work, not even the quantum state of the whole multiverse.

You are asking him for either something magical (non Turing emulable), or something infinite.




We would also be led to the peculiar situation
where machines could correctly prove that they are not machines,

I don't see how matter as a primitive makes machines able to prove
that they are not machines.

I was unclear. What I say is that if a machine convinces herself, with your help perhaps, that some primitive matter exists and has a role in the instantiation of her consciousness, then such a machine will eventually conclude (by a way similar to UDA) that she is not a machine. If such a machine is ideally correct, she would conclude correctly that she is not a machine. This comes from the fact that the UDA reasoning can be done by machines (as AUDA illustrates in some admittedly abstract way). You might intuit this if you take the time to follow the UD argument.



I think a machine (or something we
presume is a machine) proves whether or not it is a machine by how it
responds to errors or hardware failures.

A machine can never prove, still less know, that she is a machine. Even machines have to make a leap of faith to admit mechanism. Most machines will be 'naturally' against comp before introspecting deeper, and reasoning deeper, so that they can infer the possibility (but nothing more).


You could maybe say that what
we are made of is an accumulation of the universe's favorite errors,
failures, and aberrations.

Partially, yes. Even partial lies. Perhaps. I'm not sure.




making
all possible discourses of machine being of the type Bf. You might
eventually change my mind on the non-provability of comp (as opposed
to the non-recognizability of our level of comp). For this you
should convince the machine that material is necessarily primitive. I
begin to doubt that non-comp can make any sense. Hmm...

If I pull the plug on the machine, then the machine halts. Why should
that be the case were machine independent of material substrate?

Because machines can have long and complex computational histories.
If you pull the plug on the machine, you act on her 3-body, whose existence she shares with you, and so in the normal histories she will dysfunction with a probability very near 1. From the point of view of the machine, she will survive in the computations which are the closest to those normal computations (that explains the comp-immortality, which can already be explained in the inferred QM of nature).

Bruno



Bruno Marchal

unread,
Oct 3, 2011, 1:09:02 PM10/3/11
to everyth...@googlegroups.com

I agree with Craig, although the way he presents it might seem a bit
uncomputationalist (if I can say(*)).

Thoughts act on matter all the time. It is a selection of histories +
a sharing. Like when a sculptor isolates an art form from a rock, and
then send it in a museum. If mind did not act on matter, we would not
have been able to fly to the moon, and I am not sure even birds could
fly. It asks for relative works and time, and numerous deep
computations.

When you prepare coffee, mind acts on matter. When you drink coffee,
matter acts on mind. No problem here (with comp).

And we can learn to control computers at a distance, but there is no
reason to suppose that computers can't do that.

Bruno

(*) My computer put a red line under that word :)

http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

unread,
Oct 3, 2011, 1:19:08 PM10/3/11
to Everything List


On Oct 3, 8:29 am, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Mon, Oct 3, 2011 at 10:08 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> > On Oct 2, 7:00 pm, meekerdb <meeke...@verizon.net> wrote:
> >> If they are part of the same thing, then it is presumptuous to say one causes the other.
> >> One might at well say the neurons firing caused the thought of gambling - and in fact that
> >> is what Stathis is saying and for the very good reason that a little electrical
> >> stimulation, that has no "thought" or "sensorimotive" correlate, can cause both neurons
> >> firing AND their correlated thoughts.  But thoughts cannot cause the electrical stimulator
> >> to fire.  So it is *not* bidirectional.
>
> > What do you mean? Thoughts *do* cause an electrical detector to fire.
> > That's what an MRI shows. You could use any kind of electrical probe
> > or sensor instead as long as it is sufficiently sensitive to detect
> > the ordinary firing of a neuron. That's how it's possible to have
> > thought-driven computers.
> >http://www.pcworld.com/article/129889/scientists_show_thoughtcontroll...
>
> The device cited picks up electrical impulses from the scalp. The
> electrical activity comes from the neurons firing in the brain. These
> neurons may have associated thoughts when they fire but this is not
> obvious to an external observer:

So what? It *is* obvious to the internal observer. How can you justify
disqualifying the subject arbitrarily? It is unscientific to cherry
pick the data you prefer and ignore the important data just to make
the observation fit your foregone conclusions. You are just saying
that if we rule out subjectivity, then we must interpret subjectivity
as something else. It's a logical fallacy plus it has no explanatory
power. I'm giving you genuinely fresh insights into the nature of
subjectivity and you're giving me back tired arguments of ultra
instrumentalist pedagogy.

>all that is obvious is that a
> particular neuron fires because of various measurable factors such as
> its resting membrane potential and the neurotransmitter released by
> other neurons with which it interfaces. So to an external observer,
> every neural event has an observable cause, generally other neural
> events.

How do you not see that this is circular thinking? Neurological events
are caused by neurological events, really?

>This means the externally observable behaviour of the brain is
> computable, even though the external observer may not know that the
> brain is conscious.

If the outside observer is unable to factor in the relevant subjective
phenomenology then how would they be able to compute the consequences
of it?

> On the other hand, if the external observer does
> not know about neurotransmitters and receptors he will not be able to
> explain why the neurons fire - it will look to him as if they fire for
> no reason. The mental is supervenient on the physical,

No, the fact of using mental intention to control a computer through the
scalp shows that subjective states can and do control physical
behaviors. I understand that you want to play with it legalistically
to prove your foregone conclusion, but the fact remains that it is the
subject's conscious will which controls the neurons which control the
computer.

>but the mental
> cannot as a separate entity move the physical. If it could, we would
> observe neurons breaking physical laws.
>
No physical laws are broken.

Craig

Stathis Papaioannou

unread,
Oct 3, 2011, 8:28:06 PM10/3/11
to everyth...@googlegroups.com
On Tue, Oct 4, 2011 at 9:30 AM, Craig Weinberg <whats...@gmail.com> wrote:
> On Sep 29, 11:14 pm, Jason Resch <jasonre...@gmail.com> wrote:
>> Craig, do the neurons violate the conservation of energy and
>> momentum?  And if not, then how can they have any unexpected effects?
>>
>
> They don't have any unexpected effects, they just have unscheduled
> effects. I don't understand why it makes sense to think that a neuron
> can make another neuron fire, but not that the person whose brain it is
> can cause a neuron to fire. Just think of the brain as a whole as a giant
> neuron making the other ones fire (and vice versa), and we are what
> the inside of that giant neuron is like.

Whether a neuron fires or not depends on its internal state and its
environment, especially the activity of the neurons with which it
interfaces. Whether the door opens depends on the key used, the mass
of the door, the friction in the hinges and the force applied to it.
Maybe the door has the experience of wanting to open if it opens or of
not wanting to open if it doesn't open, in which case we could say
that the door did what it wanted to do. This is perfectly consistent
with our observation of doors since we cannot observe the door qualia.
But the qualia will never move the door contrary to physics. As with
the door, you can say the neuron fired because it wanted to fire and
this could be perfectly consistent with the neuron firing due to the
multiple physical factors. It is the moving that causes the wanting;
if it were the other way around we would see doors opening and neurons
firing magically. I have stated this multiple times in different ways
and you deny that it would be magic, but when an unobservable
influence causes an observable effect that is magic by definition.
Note that I'm not even saying such magic is impossible, just that no
scientist has ever seen it, which is difficult to explain if it
happens all the time as you claim.
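Stathis's claim that whether a neuron fires is fixed entirely by measurable quantities (membrane potential, synaptic input, threshold) can be illustrated with a toy model. This is my sketch, not anything from the thread, and every parameter value is invented for illustration:

```python
# A toy leaky integrate-and-fire neuron: firing is a pure function of
# measurable state and input, leaving no room for an unobservable
# influence to change the outcome. All constants are invented.

THRESHOLD = -55.0   # firing threshold (mV)
RESET = -70.0       # resting/reset membrane potential (mV)
LEAK = 0.9          # per-step decay of the deviation from rest

def step(potential, synaptic_input):
    """Advance one time step; return (new_potential, fired)."""
    potential = RESET + LEAK * (potential - RESET) + synaptic_input
    if potential >= THRESHOLD:
        return RESET, True   # spike, then reset to rest
    return potential, False

# Identical inputs always yield identical firing behaviour:
v, spikes = RESET, []
for inp in [2.0, 3.0, 4.0, 6.0, 6.0, 1.0]:
    v, fired = step(v, inp)
    spikes.append(fired)
print(spikes)  # -> [False, False, False, False, True, False]
```

The point of the sketch: given the same membrane state and the same inputs, the model neuron always does the same thing, which is exactly the externally observable determinism Stathis describes.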


--
Stathis Papaioannou

Stathis Papaioannou

unread,
Oct 3, 2011, 8:29:06 PM10/3/11
to everyth...@googlegroups.com
On Tue, Oct 4, 2011 at 4:09 AM, Bruno Marchal <mar...@ulb.ac.be> wrote:

> I agree with Craig, although the way he presents it might seems a bit
> uncomputationalist, (if I can say(*)).
>
> Thoughts act on matter all the time. It is a selection of histories + a
> sharing. Like when a sculptor isolates an art form from a rock, and then
> send it in a museum. If mind did not act on matter, we would not have been
> able to fly to the moon, and I am not sure even birds could fly. It asks for
> relative works and time, and numerous deep computations.
>
> When you prepare coffee, mind acts on matter. When you drink coffee, matter
> acts on mind. No problem here (with comp).
>
> And we can learn to control computer at a distance, but there is no reason
> to suppose that computers can't do that.

Mind acts on matter in a manner of speaking, but matter will not do
anything that cannot be explained in terms of the underlying physics.
An alien scientist could give a complete description of why humans
behave as they do and make a computational model that accurately
simulates human behaviour while remaining ignorant about human
consciousness. But the alien could not do this if he were ignorant
about protein chemistry, for example.


--
Stathis Papaioannou

Craig Weinberg

unread,
Oct 3, 2011, 8:51:08 PM10/3/11
to Everything List
On Oct 3, 11:16 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

> >>> I don't think that there are any arithmetical beings.
>
> >> In which theory?
>
> > In reality.
>
> That type of assertion is equivalent with "because God says so".
> Reality is what we try to figure out.
> If you know for sure what reality is, then I can do nothing, except
> perhaps invite you to cultivate more the modest doubting attitude.

Ok, let's say that I'm mathgnostic. I doubt the existence of
arithmetic beings independent of matter. I am sympathetic to
numerological archetypes as coherent themes (or themes of coherence)
which run through perception but to say that arithmetic spirits haunt
empty space doesn't orient me to anything true or real, it seems like
pure fiction. If it were the case then I would expect five milk
bottles in a group to have the same basic function as five protons in
a nucleus, five boron atoms in a molecule, five cells in a dish, etc.
I just don't see any examples of causally efficacious arithmetic as an
independent agent.

>
>
>
> >>> It's a fantasy,
> >>> or really more of a presumption mistaking an narrow category of
> >>> understanding with a cosmic primitive.
>
> >> You miss the incompleteness discoveries. To believe that arithmetic
> >> is
> >> narrow just tell me something about you, not about arithmetic. It
> >> means that you have a pregodelian conception of arithmetic. We know
> >> today that arithmetic is beyond any conceivable effective
> >> axiomatizations.
>
> > I don't disagree with arithmetic being exactly what you say it is,
> > only that it cannot be realized except through sensorimotive
> > experience. Without that actualization - to be computed neurologically
> > or digitally in semiconductors, analogously in beer bottles, etc, then
> > there is only the idea of the existence of arithmetic, which also is a
> > sensorimotive experience or nothing at all. There is no arithmetic
> > 'out there', it's only inside of matter.
>
> This makes sense with the non-comp theory (which you have not yet
> presented to us).
> In the comp theory, arithmetic is independent of anything, and matter
> is only a perception inside arithmetic.
>

I understand, I just have no reason to consider that anything can be
inside arithmetic, whereas I know for a fact that I am inside my body.
What form of a non-comp theory are you asking for? I will try to
comply.

>
>
> > So yes, arithmetic extends to the inconceivable and nonaxiomatizable
> > but the sensorimotive gestalts underlying arithmetic are much more
> > inconceivable and nonaxiomatizable. A greater infinity.
>
> Inside arithmetic *is* a bigger infinity than arithmetic. It is not
> even nameable.

If it's inside of arithmetic, how can it be bigger than itself?

>
>
>
> >>>> So I see a sort of racism against machine or numbers, justified by
> >>>> unintelligible sentences.
>
> >>> I know that's what you see. I think that it is the shadow of your
> >>> own
> >>> overconfidence in the theoretical-mechanistic perspective that you
> >>> project onto me.
>
> >> You are the one developing a philosophy making human with prosthetic
> >> brain less human, if not zombie.
>
> > I'm not against a prosthetic brain, I just think that it's going to
> > have to be made of some kind of cells that live and die, which may
> > mean that it has to be organic, which may mean that it has to be based
> > on nucleic acids.
>
> Replace in the quote just above "prosthetic brain" by "silicon
> prosthetic brain".

I think that if we understand that the brain itself is what is feeling
and thinking, rather than some disembodied computational function,
then we have to consider that the material may not be substitutable,
or if it is, the probability of successful substitution would be
directly proportional to the isomorphism of the biology. If we knew
of a particular computation which did cause life and consciousness to
arise in inanimate objects, then that would be convincing, but thus
far, we have not seen any suggestion of a computer program plotting
against its programmer or expressing an unwillingness to be halted.

>
> > Your theory would conclude that we should see
> > naturally evolved brains made out of a variety of materials not based
> > on living cells if we look long enough. I don't think that is
> > necessarily the case.
>
> The theory says that it is *possibly* the case, and the advent of
> computers show it to be the case right now. The difference between
> artificial and natural is ... artificial.
>

But why, if biology has nothing to do with life, and neurology has
nothing to do with consciousness, do we find no non-biological entity
having evolved to live or demonstrate human consciousness? Doesn't
that seem unlikely to you? I understand your point that comp promises
to deliver computers which could be considered as conscious as we are,
but I think that's only because science is hopelessly confused about
what consciousness is.

I agree that there is no literal difference between natural and
artificial, but it's still a glaring deficiency of comp in my mind
that in the history of the Earth there just so happens to not be any
non-organic life at all. Especially if computers, as you seem to
suggest, can adopt consciousness just by functioning in the same
manner as something conscious, then it seems by now there would be
some cave somewhere where the limestone had learned to dance like a
beetle or bloom like a flower.

>
>
> >>>>>>>> This is the kind of strong metaphysical and aristotleian
> >>>>>>>> assumption
> >>>>>>>> which I am not sure to see the need for, beyond extrapolating
> >>>>>>>> from
> >>>>>>>> our
> >>>>>>>> direct experience.
>
> >>>>>>> Is it better to extrapolate only from indirect experience?
>
> >>>>>> It is better to derive from clear assumptions.
>
> >>>>> Clear assumptions can be the most misleading kind.
>
> >>>> But that is the goal. Celar assumption leads to clear misleading,
> >>>> which can then be corrected with respect to facts, or repeatable
> >>>> experiments.
> >>>> Unclear assumptions lead to arbitrariness, racism, etc.
>
> >>> To me the goal is to reveal the truth,
>
> >> That is a personal goal. I don't think that truth can be revealed,
> >> only questioned.
>
> > How can you question it if it is not revealed?
>
> It can be suggested, like in dreams.

So it is better to extrapolate from what our dreams suggest than the
'unclear assumptions' of our ordinary, direct, shared, conscious
experience?

>
>
>
> >>> regardless of the nature of the
> >>> assumptions which are required to get there. If you a priori
> >>> prejudice
> >>> the cosmos against figurative, multivalent phenomenology then you
> >>> just
> >>> confirm your own bias.
>
> >> I don't hide this, and it is part of the scientific (modest)
> >> method. I
> >> assume comp, and I derive consequences in that frame. Everyone is
> >> free
> >> to use this for or against some world view.
>
> > It's a good method for so many things, but not everything, and I'm
> > only interested in solving everything.
>
> You might end up with a theory of everything that you will not been
> able to communicate. You might have fans and disciples (and even
> money) but not students and researchers correcting and extending your
> work.
>

I can't do anything about that. If the world is not interested in the
truth, then I can't change it.

>
>
> >>>>> I don't think there is a microcosmos illusion, unless you are
> >>>>> talking
> >>>>> about the current assumptions of the Standard Model as particles.
> >>>>> That's not an illusion though, just a specialized interpretation
> >>>>> that
> >>>>> doesn't scale up to the macrocosm. As far as where sensorimotive
> >>>>> phenomena comes from, it precedes causality. 'Comes from' is a
> >>>>> sensorimotive proposition and not the other way around. The
> >>>>> singularity functions inherently as supremacy of orientation, and
> >>>>> sense and motive are energetic functions of the difference between
> >>>>> it
> >>>>> and it's existential annihilation through time and space.
>
> >>>> That does not help.
>
> >>> That doesn't help me either.
>
> >> I mean: I don't understand. To much precise terms in a field where we
> >> question the meaning of even simpler terms.
>
> > I have precise terms because I have a precise understanding of what I
> > mean.
>
> To be frank, I don't think you do have them. I don't take for granted
> most 'familiar' words of natural language, especially those based on a
> physicalist conception of reality.

But you take for granted unfamiliar words of unnatural theories,
especially based on anti-physicalist conceptions of simulation as the
only reality.

>
> > I'm saying that causality is an epiphenomena of a feeling of
> > succession, which is a specific category of the sensorimotive palette,
> > like pain or blue.
>
> I can understand this ... by interpreting this in the comp theory,
> making your terms precise (indeed they become numbers, or number
> relations, or higher-order number relations).
> That is why sometimes I can appreciate your intuition: you talk like a
> universal (Löbian) ... machine. But then you are using what you say as
> a critique of mechanism, where the universal machine appears as a
> simple counterexample.

I'm not against machines, they are definitely a huge part of what is
going on, it's just I think I see specifically how arithmetic fits in.
Actually if you have a chance see if you get anything out of my post
today: http://s33light.org/post/10979679238. I am placing math on
another axis opposite art or medium, so that arithmetic or logos is
the generic-universal essence which subject and object share, and
techne is the concrete existence that they share. I suspect that this
is more of an epiphenomenal duality which runs perpendicular to the
main phenomenal Oriental-Occidental continuum.

>
> > All of these feelings and experiences are generated
> > by the underlying dynamic of the singularity chasing it's tail through
> > the relatively fictional expansion of timespace.
>
> I have no clue what you mean by time, space, relatively fictional,
> dynamic, generated, experiences, feelings ... in your "theory".

Time is the empty container of change (energy). Space is the empty
container of matter. Relatively fictional meaning that time and space
are epiphenomenal gaps created by matter dividing itself. They are not
an independent phenomena, they are just temporary relational gaps as
the singularity creates a fictional subdivision of itself. The
singularity cannot truly divide, since it is outside of timespace
there is nowhere else to put it and no time for such a division to
take place in, so it generates an existential metaphor, which is
dynamic, experiential, and feels.

>
>
>
> >>>>> Specifically, like if you have any two atoms, something must
> >>>>> have a
> >>>>> sense of what is supposed to happen when they get close to each
> >>>>> other.
> >>>>> Iron atoms have a particular way of relating that's different from
> >>>>> carbon atoms, and that relation can be quantified. That doesn't
> >>>>> mean
> >>>>> that the relation is nothing but a quantitative skeleton. There is
> >>>>> an
> >>>>> actual experience going on - an attraction, a repulsion, momentum,
> >>>>> acceleration...various states of holding, releasing, or binding a
> >>>>> 'charge'. What looks like a charge to us under a microscope is in
> >>>>> fact
> >>>>> a proto-feeling with an associated range of proto-motivations.
>
> >>>> Why?
>
> >>> Because that's what we are made of.
>
> >> Why should I take your words for granted.
>
> > You don't have to. You should check it out for yourself and see if it
> > makes sense, and if not, why not?
>
> My attraction to comp, is that it explains to me why the concept of
> primary matter does not make sense. In fact the more general notion of
> "being made of" does not make sense to me (even if it makes sense for
> some universal machine).

We are in agreement there though. Even though the singularity is
matter and energy(experience) to us as human beings, that's only
because of the necessary contrast with space and time which defines us
existentially. Essentially matter is as empty of substance as space
and as filled with law and chaos as mind. My only disagreement with
you on this is that I think that arithmetic is too narrow a logos to
presume to account for subjectivity. You need techne or it's like
ungrounded electricity.

>
>
>
> >>>> ?
> >>>> (I let you know that one of my main motivation consists in
> >>>> explaining
> >>>> the physical, that is explaining it without using physical notions
> >>>> and
> >>>> assumptions. The same for consciousness).
>
> >>> But what you are explaining it with is no more explainable than
> >>> physical notions or assumptions. Why explain what is real in terms
> >>> which are not real?
>
> >> You are just begging the question. You talk like if you knew what is
> >> real or not.
>
> > I know that consciousness is real,
>
> Good. My oldest opponents were disagreeing on this point (a critics
> which does not make much sense).

Heh, yeah, I can maybe see quibbling with the wording of the cogito,
but the spirit of it seems silly to deny.

>
> > and my consciousness through my
> > body tells me that matter is real.
>
> Matter is real. I do agree with this. But matter, assuming comp, is
> not something made of elementary material things. Matter, to be short,
> is the border of the universal mind, as seen by the universal mind. It
> is a real perception of something which is not primarily material, but
> sum up infinities of computations. An instructive image, is the border
> of the Mandelbrot set.

I do understand what you mean, and I almost agree, again, except that
the Mandelbrot set is too literal. It doesn't look like a mind, it
looks like a leaf or a feather. Obsessive, repetitive self-
similarity... definitely part of it, but you need the orientation of
naive sensation and motive to make sense of it. It's the elephant in
the Mandelbrot.

>
> > My consciousness also tells me that
> > some of it's own contents do not matter and it's perceptions do not
> > faithfully render what is real outside of my awareness. I would say
> > that arithmetic truths matter but they are not real, and therefore
> > cannot be manifested in a vacuum - only through some material object
> which can accommodate the corresponding sensorimotive experiences. You
> can't write a program that runs on a computer made of only liquid or
> vapor - you need solid structures to accommodate fixed arithmetic
> > truths. You need the right kinds of matter to express arithmetic
> > truths, but matter does not need arithmetic to experience it's own
> > being.
>
> Not necessarily. You have to give an argument, and there are many
> results which can explain to you how such arguments have to be very
> sophisticated. Apparently, in arithmetic, numbers do dream
> coherently (in a first person sharable way) of a stable quantum
> reality, with some symmetries at the bottom, and wavy like
> interferences.
>

I think what you are saying is that matter can arise from arithmetic,
which is possible, but I don't see the difference. Why is arithmetic
easier to explain than matter? I think that my hypothesis rooted in
'sense' (as the relation between matter-space-entropy and energy-time-
significance) is an audaciously Promethean notion which grounds our
perception in a cosmos which is both authentic and participatory, as
well as transcendent and forgiving. From comp I get nothing surprising
beyond the initial appreciation of the depth of possibilities of
arithmetic, which although impressive, strike me as being merely awe
inspiring with no hint at the gravity of the experience of organic
life.

>
>
> >> Now it is the fact that all scientist agree with simple facts like
> >> 1+9=10, etc. Actually they are using such facts already in their
> >> theories. I just show that IF we are machine, THEN those elementary
> >> facts are enough to explain the less elementary one.
>
> > But since we aren't only a machine, then it's a dead end.
>
> You should say: "but since in my theory I am assuming that we are not
> machine, it is a dead end in my theory".

Yes. Not trying to be rude, I just assume that everything I say is
automatically within the disclaimer of 'in my view'.
>
> > It's
> > circular reasoning because you can say we can't prove we're not
> > machines,
>
> I say the exact opposite. We can prove that we are not machine (in
> case we are not machine). If we are (consistent) machine, then we
> cannot prove it.

So how do we prove that we are not machine? Why can't we be both
machine and not machine?

>
> > but the whole idea of 'proving' is mechanical so you are
> > just magnifying the implicit prejudice and getting further from the
> > non-mechanistic truths of awareness.
>
> The human activity of proving is not mechanical(*), but a gentle,
> polite proof should be mechanically checkable. You can't say to the
> peer reviewers that for proposition 13 they have to pray to God or
> smoke salvia divinorum. (Or you say it only at the coffee break, and
> this is for private concerns, not for the publication, unless it is a
> paper on salvia or God, but then the goal is no more to prove but to
> suggest a possible empirical discovery.)
>
> (*) assuming P ≠ NP.

If peer reviewers demand that a theory which explains subjectivity not
examine subjectivity directly, then they have a priori excluded any
possibility of understanding subjectivity. The peer reviewers are the
problem, not the theory.


>
>
>
> >>>>>>> The link between the
> >>>>>>> sensorimotive and electromagnetic is the invariance between the
> >>>>>>> two.
>
> >>>>>> ?
> >>>>> Feelings and action potentials have some phenomenological overlap.
>
> >>>> What is feeling, what is action, what is potential?
>
> >>> To ask what feeling is can only be sophistry.
>
> >> Not when addressing issues in fundamental cognitive science. Niether
> >> matter nor consciousness should be taken as simple elementary
> >> notions.
>
> > But numbers should be taken as elementary notions?
>
> In the usual mathematical sense. No need of extra metaphysical
> assumption. You just need to believe sentences like "prime numbers
> exists".

They exist in the context of a particular sensorimotive logos, not in
any independent sense. Something like the visible spectrum is a much
stronger primitive as it appears to us unbidden and unexplained as a
shared experience without having to be learned or understood.

> All the material science use this. Despite the claims of some
> philosophers, we just cannot do science without assuming the
> independence of the truth of elementary (first order) arithmetical
> relations.

They can have truth or refer to truth without themselves being
phenomena which exist independently. They aren't a they even, it's
just an ephemeral collection of human ideas about quantitative
universality. I don't see that they describe quality or techne at all.

>
> > That's the problem,
> > you are trying to explain awareness as an epiphenomenon
>
> Awareness is not an epiphenomenon at all. It is a real, non-illusional
> epistemological phenomenon which is responsible (in some logico-
> arithmetical sense) for the rise of physical reality.

If it's not an epiphenomenon, then are you saying it is not a
consequence of arithmetic?

>
> It is: NUMBERS ==> CONSCIOUSNESS/DREAMS ==> SHARABLE DREAMS (physical
> realities).

Isn't that saying consciousness is an epiphenomenon of numbers? What
are numbers without consciousness?

>
> > of cognitive
> > science, when of course cognition arises from feeling (otherwise
> > babies would come out of the womb solving math equations instead of
> > crying, and civilizations should evolve binary codes before
> > ideographic alphabets and cave paintings).
>
> I agree that cognition arises from feelings.
>

Cool

>
>
> >>> It is a primitive of
> >>> human subjectivity, and possibly universal subjectivity. To
> >>> experience
> >>> directly, qualitatively, significantly. An action potential is an
> >>> electromagnetic spike train among neurons. They can be correlated to
> >>> instantiation of feelings.
>
> >> I agree with all this, but that has to be explained, not as taken for
> >> granted.
>
> > How can any primitive be explained?
>
> It can't, by definition. That is why I don't take matter and
> consciousness as primitive, given that we can explain them from
> numbers (and their laws). The contrary is false. We cannot explain
> numbers by matter or consciousness.

I think that we can explain numbers from consciousness. They are
sensorimotive teleological gestures refined and polished into an
instrumental literalism which closely approximates a particular band
of literal sense that we share with many physical, chemical, and
primitive biological phenomena. They do not extend beyond a
superficial treatment of experiences like pain, pleasure, sensation,
humor, poetry, music, etc.

> It can be proved that numbers
> cannot be explained at all. In that sense, they are provably
> necessarily primitive.

No more so than colors or words, thoughts, feelings, being, etc.

>
> > If explanation is to reduce to
> > simpler known phenomena, and primitive is to be the simplest knowable
> > phenomena, then it's a contradiction to explain it any further. We can
> > only place it into a meaningful context, which I think my hypothesis
> > does.
>
> >>>>> That's the link. They both map to the same changes at the same
> >>>>> place
> >>>>> and time, they just face opposite directions. Electromagnetism is
> >>>>> public front end, sensorimotive is private back end, which for us
> >>>>> can
> >>>>> focus it's attention toward the front, back, or the link in
> >>>>> between.
>
> >>>> ?
>
> >>> Electromagnetic and sensorimotive phenomena are opposite sides of
> >>> the
> >>> same thing. I don't know how I could make it more clear.
>
> >> That is your main problem.
>
> > Ok, but what isn't clear? Opposite? 'same thing'? Electromagentism?
> > Sensorimotive?
>
> Yes, all that.
>

If you observe a living brain under an MRI, you can detect certain
changes in the equipment which can be correlated meaningfully with the
experiences of the subject being examined. So these physical changes
in the brain which are contagious to the MRI's antennae are part of
the 'same thing' as the experiences of the subject - they have the
same rhythmic patterns, instantiation, and duration. The content,
however, is precisely the opposite: the MRI patterns are topological
regions of activity in a 3D space, without any particular meaning or
significance, but with great specificity in terms of precise location
and public verifiability. The subjective experience is literally the
opposite. Not topological in space but perceptual in time. If you
shorten the interval too much, you lose the sense of the perception
entirely, but the electromagnetic pattern does not vanish. The
subjective experience has significance and meaning. Without the
experience side of it, the neural correlate would be no more
interesting than examining sand dunes. Without taking significance
into account, there is no purpose to examine the MRI in the first
place.

>
>
> > Electromagnetism is the name we give to the various phenomena of
> > matter across space - waving, attracting, repulsing, moving,
> > intensifying, discharging, radiating, accumulating density, surfaces,
> > depth, consistency, etc. Sensorimotivation is the name I'm giving to
> > the various phenomena of experience (energy) through time - detecting,
> > sensing, feeling, being, doing, intention, image, emotion, thought,
> > meaning, symbol, archetype, metaphor, semiotics, communication,
> > arithmetic, etc.
>
> That's what the numbers can explain, and that is what cannot explain
> the numbers (without assuming them implicitly).

I think that numbers can't explain any of that without the a priori
expectation of those experiences. Numbers by themselves do not suggest
anything but more numbers. They have no capacity to recognize their
own patterns, only to be recognized by the computational shadows cast
by concretely embodied agents of sense and motive.

I'm going to have to think about that for a while, but it helps I
think. So you are saying that the difference between electromagnetic
and sensorimotive is that sensorimotive includes p or the arithmetic
itself, the content, while electromagnetism contains only the
computational consequences of the arithmetic. Yeah, if that's what you
are saying I like it. It gives me something new. I don't think it
captures the significance of what the presence of p does as far as
making sensorimotive analog through time and electromagnetic being
discrete across space.

>
> >> Hmm... The difference between subjective and sensorimotive would be
> >> captured by the difference between Bp & p, and Bp & Dt & p. That
> >> confirms my feeling described above.
>
> > I'll get back to you if you can explain the variables better. I tried
> > Googling them but nothing clear comes up for me.
>
> I hope that what I wrote above helps a bit. There are good books on the
> subject, but you need to follow some course in mathematical logic, to
> get familiar with it.

I think that there is a cost associated with relying exclusively on
mathematical logic in a TOE though. My hypothesis shows how modal
agreements magnify the in-language and attenuate the outward
sensitivity. Like a gaggle of teenagers hanging around in a pack,
talking to each other incessantly and oblivious to the world.
Not necessarily. It just may be that consciousness is a spatio-
temporal event calibrated to a specific circumstance within the
firmament of the singularity which cannot be simulated. It's a fixed
MAC address not only of a precise location and time relative to the
absolute which cannot be spoofed, but a precise circumstantial pattern
of energy and matter, so that the exact circumstance of someone's
birth - the thoughts and feelings of the doctor and nurse, the sound
of the cars outside, the proximity to the vineyards and the
ocean...all of that may need to be reproduced to instantiate a
particular identity.
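Since the Bp & p notation doesn't Google well, here is a toy sketch of what it is usually taken to mean, on the assumption (mine, not spelled out in the thread) that B is read as a box/provability operator and D as its diamond dual over a Kripke frame, with t the constant "true". The difference between the "knowledge" variant Bp & p and the variant Bp & Dt & p shows up at "cul-de-sac" worlds, where B holds vacuously:

```python
# Toy Kripke semantics for Bruno's variants. Illustrative only; this is
# not AUDA itself, just the standard box/diamond reading of B and D.

# A frame with three worlds; world 2 is a "cul-de-sac" (no successors).
R = {0: [1, 2], 1: [2], 2: []}
# Valuation: the worlds at which the atomic proposition p holds.
p_true = {0, 1, 2}

def B(phi, w):          # box: phi holds in every world accessible from w
    return all(phi(v) for v in R[w])

def D(phi, w):          # diamond: phi holds in some accessible world
    return any(phi(v) for v in R[w])

p = lambda w: w in p_true
t = lambda w: True      # the constant "true"

know = lambda w: B(p, w) and p(w)              # Bp & p
feel = lambda w: B(p, w) and D(t, w) and p(w)  # Bp & Dt & p

for w in R:
    print(w, know(w), feel(w))
# At the cul-de-sac world 2, B holds vacuously (there is nothing
# accessible to check), so "know" is True while "feel" is False:
# the Dt conjunct rules out dead-end worlds, and the two variants
# therefore obey different logics.
```

The point of the sketch is only that conjoining Dt is not redundant: it changes which worlds satisfy the formula, which is why the two variants are treated as distinct hypostases.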

>
>
>
> >> We would also be led to the peculiar situation
> >> where machine could correctly prove that they are not machine,
>
> > I don't see how matter as a primitive makes machines able to prove
> > that they are not machines.
>
> I was unclear. What I say is that if a machine convinces herself, with
> your help perhaps, that some primitive matter exists and has a role
> for the instantiation of her consciousness, then such a machine will
> eventually conclude (by a way similar to UDA) that she is not a
> machine. If such a machine is ideally correct, she would conclude
> correctly that she is not a machine. This comes from the fact that the
> UDA reasoning can be done by machines (as AUDA illustrated in some
> admittedly abstract way). You might intuit this if you take the time
> to follow the UD argument.
>

Hmm. Not sure I get it. I sort of get that the mathematical
proposition of a matter-like topology would give rise to some novelty
through computational non-accessibility but I don't know that the
novelty would necessarily seem non-mechanical.

> > I think a machine (or something we
> > presume is a machine) proves whether or not it is a machine by how it
> > responds to errors or hardware failures.
>
> A machine can never prove, still less know, that she is a machine.
> Even machines have to make a leap of faith to admit mechanism. Most
> machines will be 'naturally' against comp, before introspecting
> deeper, and reasoning deeper, so that they can infer the possibility
> (but nothing more).

I'm not against the reasoning of that, I just don't think it's a
compelling basis for rich perception. Sure, everyone's reality tunnel
looks like reality and not a tunnel, but that doesn't explain why the
contents of the tunnel are so interesting and so real.

>
> > You could maybe say that what
> > we are made of is an accumulation of the universe's favorite errors,
> > failures, and aberrations.
>
> Partially, yes. Even partial lies. Perhaps. I'm not sure.

Sure, yes. Partial lies are probably the only way to be certain of
keeping truth alive. Indra's Net of Bullshit, haha. Seriously though,
you need the alchemical base alloys to hide the precious metal within,
otherwise it wouldn't be precious.

>
>
>
> >> making
> >> all possible discourses of machine being of the type Bf. You might
> >> eventually change my mind on the non provability of comp (as opposed
> >> to the non recognizability of the our level of comp). For this you
> >> should convince the machine that material is necessarily primitive. I
> >> begin to doubt that non-comp can make any sense. Hmm...
>
> > If I pull the plug on the machine, then the machine halts. Why should
> > that be the case were the machine independent of its material substrate?
>
> Because machines can have long and complex computational histories.
> If you pull the plug on the machine, you act on her 3-body, whose
> existence she shares with you, and so in the normal histories she will
> dysfunction with a probability very near 1. From the point of view of
> the machine, she will survive in the computations which are closest to
> those normal computations (that explains the comp-immortality,
> which can already be explained in the inferred QM of nature).

So a computer keeps computing even when you turn it off? That would be
hard to swallow if you are saying that.

Craig

Craig Weinberg

Oct 3, 2011, 8:56:43 PM10/3/11
to Everything List
On Oct 3, 8:28 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
We do see neurons firing in response to no stimulation other
than the subject's conscious attention and intention. It's not magic,
it's how it actually works. It's how you are making sense of these
words right now. You can have your neurons move a mouse around, either
through your hand or directly through one of those scalp rigs. It's
only magic if you arbitrarily deny the subject's observation of their
own subjective behavior.

Craig

Craig Weinberg

Oct 3, 2011, 9:08:38 PM10/3/11
to Everything List
On Oct 3, 8:29 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Tue, Oct 4, 2011 at 4:09 AM, Bruno Marchal <marc...@ulb.ac.be> wrote:
> > I agree with Craig, although the way he presents it might seems a bit
> > uncomputationalist, (if I can say(*)).
>
> > Thoughts act on matter all the time. It is a selection of histories + a
> > sharing. Like when a sculptor isolates an art form from a rock, and then
> > send it in a museum. If mind did not act on matter, we would not have been
> > able to fly to the moon, and I am not sure even birds could fly. It asks for
> > relative works and time, and numerous deep computations.
>
> > When you prepare coffee, mind acts on matter. When you drink coffee, matter
> > acts on mind. No problem here (with comp).
>
> > And we can learn to control computer at a distance, but there is no reason
> > to suppose that computers can't do that.
>
> Mind acts on matter in a manner of speaking, but matter will not do
> anything that cannot be explained in terms of the underlying physics.

No matter how many times I say that you do not understand what I mean
if you still bring this up, you still bring it up again and again. The
only one talking about defying physics is you. The fact is, we have no
physical explanation of why neurons in the amygdala suddenly
depolarize their membranes when the subject thinks about gambling, so
it breaks no physical law to draw the obvious conclusion that this
subjective intention is itself the cause, rather than an arbitrary
cluster of neurons having some peculiar sensitivity to secondhand
associations of the shapes of playing cards, dice, horses (sometimes),
slot machines, racing cars, sports games (sometimes), etc.. It makes
no sense as an a-signifying neurochemical process. It only makes sense
as a signifying conscious narrative.

> An alien scientist could give a complete description of why humans
> behave as they do and make a computational model that accurately
> simulates human behaviour while remaining ignorant about human
> consciousness. But the alien could not do this if he were ignorant
> about protein chemistry, for example.

I say wrong and wrong. An alien scientist simulating human behavior
without any understanding of human consciousness cannot give a
complete description of why humans behave at all. They would not have
the foggiest idea what to make of a movie or a joke or a baseball
game. It's an absurd suggestion. It is to say that there is no
difference between the pirates in Pirates of the Caribbean and actual
pirates.

Your counterexample fails even more completely. An alien who was
familiar with human consciousness would need only read a book or watch
a movie to be able to simulate human behavior in any mode -
biological, cinematic, verbal re-enactment, miniature sculpture, etc.
No chemical or neurological understanding would be required or even
relevant.

Craig

Stathis Papaioannou

Oct 3, 2011, 9:22:06 PM10/3/11
to everyth...@googlegroups.com
On Tue, Oct 4, 2011 at 11:56 AM, Craig Weinberg <whats...@gmail.com> wrote:

> We do see neurons firing in response to no stimulation other
> than the subject's conscious attention and intention. It's not magic,
> it's how it actually works. It's how you are making sense of these
> words right now. You can have your neurons move a mouse around, either
> through your hand or directly through one of those scalp rigs. It's
> only magic if you arbitrarily deny the subject's observation of their
> own subjective behavior.

The neurons are firing in my brain as I'm thinking, but if you could
go down to the microscopic level you would see that they are firing
due to the various physical factors that make neurons fire, eg. fluxes
of calcium and potassium caused by ion channels opening due to
neurotransmitter molecules binding to the receptors and changing their
conformation. If you take each neuron in the brain in turn at any
given time it will always be the case that it is doing what it is
doing due to these factors. You will never find a ligand-activated ion
channel opening in the absence of a ligand, for example. That would be
like a door opening in the absence of any force. Just because doors
and protein molecules are different sizes doesn't mean that one can do
magical things and the other not.


--
Stathis Papaioannou

Craig Weinberg

Oct 3, 2011, 11:30:36 PM10/3/11
to Everything List
On Oct 3, 9:22 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
You will also never find a ligand-activated ion channel that is
associated with a particular subjective experience fire in the absence
of that subjective experience (that would be a zombie, right?), so why
privilege the pixels of the thing as the determining factor when the
overall image is just as much dictating which pixels are lit and how
brightly? Again, every time you mention magic it just means that you
don't understand my point. Every time you mention it, I am going to
give you the same response. I understand your position completely, but
you are just throwing dirt clods in the general direction of mine
while closing your eyes.

Craig

Stathis Papaioannou

Oct 4, 2011, 2:11:11 AM10/4/11
to everyth...@googlegroups.com
On Tue, Oct 4, 2011 at 2:30 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> The neurons are firing in my brain as I'm thinking, but if you could
>> go down to the microscopic level you would see that they are firing
>> due to the various physical factors that make neurons fire, eg. fluxes
>> of calcium and potassium caused by ion channels opening due to
>> neurotransmitter molecules binding to the receptors and changing their
>> conformation. If you take each neuron in the brain in turn at any
>> given time it will always be the case that it is doing what it is
>> doing due to these factors. You will never find a ligand-activated ion
>> channel opening in the absence of a ligand, for example. That would be
>> like a door opening in the absence of any force. Just because doors
>> and protein molecules are different sizes doesn't mean that one can do
>> magical things and the other not.
>
> You will also never find a ligand activated ion channel that is
> associated with a particular subjective experience fire in the absence
> of that subjective experience (that would be a zombie, right?), so why
> privilege the pixels of the thing as the determining factor when the
> overall image is just as much dictating which pixels are lit and how
> brightly? Again, every time you mention magic it just means that you
> don't understand my point. Every time you mention it, I am going to
> give you the same response. I understand your position completely, but
> you are just throwing dirt clods in the general direction of mine
> while closing your eyes.

The ion channel only opens when the ligand binds. The ligand only
binds if it is present in the synapse. It is only present in the
synapse when the presynaptic neuron fires. And so on. This whole
process is associated with an experience, but it is a completely
mechanical process. The equivalent is my example of the door: it opens
because someone turns the key and pushes it. If it had qualia it may
also be accurate to say that it opens because it wants to open, but
since we can't see the qualia they can't have a causal effect on the
door. If they could we would see the door opening by itself and we
would be amazed. It's the same with the neuron: if the associated
qualia had a causal effect on matter we would see neurons firing in
the absence of stimuli, which would be amazing.

Again, it's not that it's wrong to say that the neurons fired in the
amygdala because the person thought about gambling, it's that the
third person observable behaviour of the neurons can be entirely
explained and predicted without any reference to qualia. If the
neurons responded directly to qualia they would be observed to do
miraculous things and it may not be possible to predict or model their
behaviour.
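Stathis's causal chain can be caricatured in a few lines, under the obvious simplification (mine, for illustration; not real channel kinetics) that each step is a pure function of third-person observables:

```python
# Toy model of the mechanistic chain: presynaptic firing -> transmitter
# release -> ligand binding -> channel gating. Every step is a pure
# function of third-person observables; there are no hidden inputs.

def channel_state(ligand_bound):
    """A ligand-gated channel is open iff a ligand occupies its receptor."""
    return "open" if ligand_bound else "closed"

def synapse_step(presynaptic_fired):
    """One step of the chain: firing puts transmitter in the cleft,
    the transmitter binds, and binding gates the channel."""
    ligand_in_cleft = presynaptic_fired    # release
    return channel_state(ligand_in_cleft)  # binding -> gating

# Causal closure in miniature: same observable input, same output,
# every time. "A door opening in the absence of any force" would be
# channel_state(False) returning "open", which this model cannot do.
print(synapse_step(True), synapse_step(False))
```

The sketch captures only the determinism claim; whether the real system admits additional top-down causes is exactly what the thread is disputing.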


--
Stathis Papaioannou

Craig Weinberg

Oct 4, 2011, 8:39:47 AM10/4/11
to Everything List
On Oct 4, 2:11 am, Stathis Papaioannou <stath...@gmail.com> wrote:

>
> The ion channel only opens when the ligand binds. The ligand only
> binds if it is present in the synapse. It is only present in the
> synapse when the presynaptic neuron fires. And so on.

It's the 'and so on' where your explanation breaks down. You are
arbitrarily denying the top down, semantic, subjective participation
as a cause. There is no presynaptic neuron prior to the introduction
of the thought of gambling. The thought is the firing of many neurons.
They are the same thing, except that the reason they are firing is
because of the subject choosing to realize a particular motivation (to
think about something or move a mouse, etc). There is no neurological
reason why those neurons would fire. They would not otherwise fire at
that particular time.

>This whole
> process is associated with an experience, but it is a completely
> mechanical process.

Starting a car initiates a mechanical process, and driving a car
executes a mechanical process, but without the driver choosing to
start the car and use the steering wheel and pedals to correspond with
their subjective perception and motivation, the car doesn't do
anything but idle. You cannot predict where a car is going to go based
on an auto mechanic's examination of the car. I can argue this point
all day, every day. I can give you different examples, describe it in
different ways, but I can't make you see what you are missing. I know
exactly your position. You think that if you look at atoms they cannot
do anything except what we expect any generic atom to do, and since
everything is made of atoms, then everything can only be an
elaboration of those probabilities. I get that. You don't need to
restate your position to me ever again. You are quite clear in what
you are saying. I'm telling you that it's medieval compared to what
I'm talking about.

You aren't seeing that atoms respond to their environment - they have
charge and make bonds, and that the environment can change on a macro
scale for macro scale reasons just as well as the macro scale can be
changed for microcosmic reasons. They are the same thing. Just as I am
choosing these letters to make up these words because I have a
sentence in mind that I want to write, not because my fingers have no
choice but to hit these keys to satisfy some chemical or physical law.

>The equivalent is my example of the door: it opens
> because someone turns the key and pushes it. If it had qualia it may
> also be accurate to say that it opens because it wants to open, but
> since we can't see the qualia they can't have a causal effect on the
> door.

Someone turns the key and pushes it because they want to. It is their
qualia that has a causal effect on the door and *nothing else*. The
intentionality of the subject *uses* the neurons of the brain, which
use the efferent nerves down the spine, which use the muscle tissue
to contract, which moves the arm connected to the hand that holds the
key and articulates the turning and opens the door which satisfies the
sensory>motive>motive>motor>motor>motor>sensory chain of custody. The
door opens because the person sees the door (visual sense),
understands how it works and that they have the key (cognitive sense),
wants to unlock it (motive intent, emotional sense), is able to use
their brain, spinal cord, arm, hand, and key as a single coordinated
instrument (motive>motive>motor>fine motor>motor extension) to satisfy
their desire to feel and see that the door is open (sensory) and to
pass through the door (motor).

Yes, I understand that you can look at it the other way and say that
since it is the brain that stimulates and coordinates the arm, and it
is the brain's activity that causes that, and that the neurons in the
brain cause that, and that the ion channels, membrane potentials,
neurotransmitter molecules, and atoms that cause all of that, then you
should be able to calculate from the positions of all of that
microcosmic phenomena that the door will open. But it doesn't work
that way. The microcosmos doesn't know what a door is. It has a very
complex job to do already in its own biochemical level of the
universe. Just as we have no direct awareness of what our DNA is
doing, our tissues don't know who we are or why we want to open the
door. Only we know that.

> If they could we would see the door opening by itself and we
> would be amazed. It's the same with the neuron: if the associated
> qualia had a causal effect on matter we would see neurons firing in
> the absence of stimuli, which would be amazing.

The qualia are the stimuli. Why else do you think they're there? What
would be the point of qualia if not to exert an influence on the
choices we make?

>
> Again, it's not that it's wrong to say that the neurons fired in the
> amygdala because the person thought about gambling, it's that the
> third person observable behaviour of the neurons can be entirely
> explained and predicted without any reference to qualia.

They cannot be predicted any more than an auto mechanic can predict
where a car is going to go. They can explain the mechanism's
superficial function, but they can't make sense of the purpose - the
sense or motive. We can find only what and how and where in the
neuron, not the who and the why and the when. You need all six to
really 'explain' or predict.

>If the
> neurons responded directly to qualia they would be observed to do
> miraculous things and it may not be possible to predict or model their
> behaviour.

They do respond directly to qualia. Some people do feel that life and
free will is miraculous, that's up to you - because you can choose
your opinion and your brain will follow your lead. You are not only a
puppet of your neurology, not completely, or you could not even
question it in the first place because non-determinism would be
inconceivable. It's not inconceivable to me. It's as clear and obvious
as these letters I'm choosing to type here and you are choosing to
read.

Craig

Quentin Anciaux

Oct 4, 2011, 8:54:34 AM10/4/11
to everyth...@googlegroups.com


2011/10/4 Craig Weinberg <whats...@gmail.com>

On Oct 4, 2:11 am, Stathis Papaioannou <stath...@gmail.com> wrote:

>
> The ion channel only opens when the ligand binds. The ligand only
> binds if it is present in the synapse. It is only present in the
> synapse when the presynaptic neuron fires. And so on.

It's the 'and so on' where your explanation breaks down. You are
arbitrarily denying the top down, semantic, subjective participation
as a cause. There is no presynaptic neuron prior to the introduction
of the thought of gambling.

And where is the thought then? Reading you, it exists outside of the brain matter... If it is the brain matter, then the external observables are all there is to it, and reproducing the external behaviours will reproduce the qualia.

 
The thought is the firing of many neurons.
They are the same thing, except that the reason they are firing is
because of the subject choosing to realize a particular motivation (to
think about something or move a mouse, etc). There is no neurological
reason why those neurons would fire. They would not otherwise fire at
that particular time.

>This whole
> process is associated with an experience, but it is a completely
> mechanical process.

Starting a car initiates a mechanical process, and driving a car
executes a mechanical process, but without the driver choosing to
start the car and use the steering wheel and pedals to correspond with
their subjective perception and motivation, the car doesn't do
anything but idle. You cannot predict where a car is going to go based
on an auto mechanics examination of the car.

No, but I can build a copy of the car which will do the same as the car provided a driver drives it...
 




--
All those moments will be lost in time, like tears in rain.

Craig Weinberg

Oct 4, 2011, 10:04:11 AM10/4/11
to Everything List
On Oct 4, 8:54 am, Quentin Anciaux <allco...@gmail.com> wrote:
> 2011/10/4 Craig Weinberg <whatsons...@gmail.com>
>
> > On Oct 4, 2:11 am, Stathis Papaioannou <stath...@gmail.com> wrote:
>
> > > The ion channel only opens when the ligand binds. The ligand only
> > > binds if it is present in the synapse. It is only present in the
> > > synapse when the presynaptic neuron fires. And so on.
>
> > It's the 'and so on' where your explanation breaks down. You are
> > arbitrarily denying the top down, semantic, subjective participation
> > as a cause. There is no presynaptic neuron prior to the introduction
> > of the thought of gambling.
>
> And where is the thought then? Reading you, it exists outside of the brain
> matter... If it is the brain matter, then the external observables are all
> there is to it, and reproducing the external behaviours will reproduce qualia.

It's inside (and 'throughside') of matter. It doesn't ex-ist, it
insists. Reproducing the external behaviors won't help, any more than
attaching marionette strings to a cadaver would bring a person back to
life.

I think that all change has an experience associated with it. This is
in fact what energy is; an experience of perception over time. The
ability to experience change first hand carries with it, by extension,
the ability to experience certain kinds of change second hand. We are
made of matter, so we can relate to physical changes - a bowling ball
striking pins, a bomb going off, etc. We are made of biological cells
so we can relate to biological changes, but non-biological matter
cannot experience biological changes. Bowling balls don't feel like
they are alive.

> > The thought is the firing of many neurons.
> > They are the same thing, except that the reason they are firing is
> > because of the subject choosing to realize a particular motivation (to
> > think about something or move a mouse, etc). There is no neurological
> > reason why those neurons would fire. They would not otherwise fire at
> > that particular time.
>
> > >This whole
> > > process is associated with an experience, but it is a completely
> > > mechanical process.
>
> > Starting a car initiates a mechanical process, and driving a car
> > executes a mechanical process, but without the driver choosing to
> > start the car and use the steering wheel and pedals to correspond with
> > their subjective perception and motivation, the car doesn't do
> > anything but idle. You cannot predict where a car is going to go based
> > on an auto mechanics examination of the car.
>
> No, but I can build a copy of the car which will do the same as the car
> provided a driver drives it...

Do the same thing meaning idle in the driveway, sure. To copy a driver
is something else entirely. You still can't predict where either
driver is going to take the car from looking at the mechanics of the
car.

Craig

Bruno Marchal

Oct 4, 2011, 10:24:37 AM10/4/11
to everyth...@googlegroups.com

On 04 Oct 2011, at 02:29, Stathis Papaioannou wrote:

> On Tue, Oct 4, 2011 at 4:09 AM, Bruno Marchal <mar...@ulb.ac.be>
> wrote:
>
>> I agree with Craig, although the way he presents it might seems a bit
>> uncomputationalist, (if I can say(*)).
>>
>> Thoughts act on matter all the time. It is a selection of histories
>> + a
>> sharing. Like when a sculptor isolates an art form from a rock, and
>> then
>> send it in a museum. If mind did not act on matter, we would not
>> have been
>> able to fly to the moon, and I am not sure even birds could fly. It
>> asks for
>> relative works and time, and numerous deep computations.
>>
>> When you prepare coffee, mind acts on matter. When you drink
>> coffee, matter
>> acts on mind. No problem here (with comp).
>>
>> And we can learn to control computer at a distance, but there is no
>> reason
>> to suppose that computers can't do that.
>
> Mind acts on matter in a manner of speaking, but matter will not do
> anything that cannot be explained in terms of the underlying physics.

Locally, you are right. But the physics itself arises from the
arithmetical computation structures on which consciousness supervenes
(to be short). So I am not sure if the expression of consciousness
duration for very short "emulation time" makes sense.
In fact, between any two sequential computational states *at some
level of description*, there exists an infinity of computational states
belonging to computations generated by the UD going through them *at
some more refined level*, and this participates in the first person
experience generation (as in its material constitution).

> An alien scientist could give a complete description of why humans
> behave as they do and make a computational model that accurately
> simulates human behaviour while remaining ignorant about human
> consciousness. But the alien could not do this if he were ignorant
> about protein chemistry, for example.


OK.

Bruno

http://iridia.ulb.ac.be/~marchal/


meekerdb

Oct 4, 2011, 2:59:27 PM
to everyth...@googlegroups.com

This goes by the name "causal completeness": the idea that the 3-p observable state at t
is sufficient to predict the state at t+dt. Craig wants to add to this that there is
additional information which is not 3-p observable and which makes a difference, so that
the state at t+dt depends not just on the 3-p observables at t, but also on some
additional "sensorimotive" variables. If you assume these variables are not independent
of the 3-p observables, then this is just a panpsychic version of consciousness supervening
on the 3-p states. They are redundant in the informational sense. If you assume they
are independent of the 3-p variables and yet make a difference in the time evolution of
the state then it means the predictions based on the 3-p observables will fail, i.e. the
laws of physics and chemistry will be violated.
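The dichotomy can be put as a toy state-update model (an illustrative sketch only; the dynamics and variable names here are my own assumptions, not anything proposed in the thread):

```python
# Toy model of the "causal completeness" dichotomy.
# The update rule and variable names are illustrative assumptions.

def step_observable(x):
    """Deterministic 'physics': the 3-p state at t fully determines t+dt."""
    return (3 * x + 1) % 17

def step_with_dependent_hidden(x):
    """Case 1: a hidden variable s that is a function of the 3-p state.
    It is informationally redundant - predictions from x alone still hold."""
    s = x % 5                 # determined by the observables
    return (3 * x + 1) % 17   # dynamics unchanged; s adds no information

def step_with_independent_hidden(x, s):
    """Case 2: s is independent of the 3-p state yet alters the dynamics.
    Predictions from x alone now fail, i.e. the observable 'laws' break."""
    return (3 * x + 1 + s) % 17

x = 4
predicted = step_observable(x)
assert step_with_dependent_hidden(x) == predicted         # redundant case
assert step_with_independent_hidden(x, s=2) != predicted  # prediction fails
```

In the first case an outside observer tracking only x never notices s; in the second, the divergence from the x-only prediction would itself be a 3-p observable violation.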

Of course this violation may be hard to detect in something very complicated like a brain;
but Craig's theory doesn't seem to assume the brain is special in that respect and even a
single electron supposedly has these extra, unobservable variables, i.e. a mind of its
own. The problem with electrons or other simple systems is that while we have complete
access to their 3-p variables, we don't have access to their hypothetical other variables;
the ones we call 1-p when referring to humans. So when all the silver atoms in a
Stern-Gerlach do just as we predict, it can be claimed that they all had the same 1-p
variables and that's why the 3-p variables were sufficient to predict their behavior.

So the only way I see to test this theory, even in principle, would be to observe Craig's
brain at a very low level while having him report his experiences (at least to himself)
and show that his experiences and his brain states were not one-to-one. Of course this is
probably impossible with current technology. Observing the brain at a coarse-grained
level leaves open the possibility that one is just missing the 3-p variables that would
show the relationship to be one-to-one.

So I'd say that until someone thinks of an empirical test for this "soul theory",
discussing it is a waste of bandwidth.

Brent

Craig Weinberg

Oct 4, 2011, 8:15:59 PM
to Everything List
On Oct 4, 2:59 pm, meekerdb <meeke...@verizon.net> wrote:

>
> This goes by the name "causal completeness"; the idea that the 3-p observable state at t
> is sufficient to predict the state at t+dt.  Craig wants add to this that there is
> additional information which is not 3-p observable and which makes a difference, so that
> the state at t+dt depends not just on the 3-p observables at t, but also on some
> additional "sensorimotive" variables.  If you assume these variables are not independent
> of the 3-p observables, then this is just panpsychic version of consciousness supervening
> on the 3-p states.  They are redundant in the informational sense.   If you assume they
> are independent of the 3-p variables and yet make a difference in the time evolution of
> the state then it means the predictions based on the 3-p observables will fail, i.e. the
> laws of physics and chemistry will be violated.

Why would they have to be either completely dependent or independent?
I've given several examples demonstrating how we routinely exercise
voluntary control over parts of our minds, bodies, and environment
while simultaneously being involuntarily controlled by those same
influences. This isn't a theory, this is the raw data set.

If it were the case that the 3p and 1p were completely independent,
then you would have ghosts jumping around into aluminum cans and
walking around singing, and if they were completely dependent then
there would be no point in being able to differentiate between
voluntary and involuntary control of our mind, body, and environment.
Such an illusory distinction would not only be redundant but it would
have no ontological basis to even be able to come into being or be
conceivable. It would be like an elephant growing a TV set out of its
trunk to distract it from being an elephant.

Since neither of those two cases is possible, I propose, as I have
repeatedly proposed, that the 3p and 1p are in fact part of the same
essential reality in which they overlap, but that they each extend in
different topological directions; specifically, 3p into matter, public
space, electromagnetism, entropy, and relativity, and 1p into energy,
private time, sensorimotive, significance, and perception.

No laws of physics are broken by consciousness, but it is very
confusing because our only example of consciousness is human
consciousness, which is a multi-trillion cell awareness. The trick is
to realize that you cannot directly correlate our experience of
consciousness with the 3-p cellular phenomenology, but to only
correlate it with the 3-p behavior of the brain as a whole. That's the
starting point. If you are going to try to understand what a movie is
about, you have to look at the whole images of the movie, and not
focus on the pixels of the screen or the mechanics of pixel
illumination to guide your interpretation. There is no human
consciousness at that low level. There may be sensorimotive 1-p
phenomenology there, and I think that there is, but we can't prove it
now. Whatever we can prove is there in 3-p would only relate to that
low-level 1-p, which is unknown to us.

My proposition is that our 1-p consciousness builds from lower level 1-
p awareness and higher level 1-p semantic environmental influences,
like cultural ideas, family traditions, etc. It is not predictable
from 3-p appearances alone, but not because it breaks the laws of
physics. Physics has nothing to say about what particular patterns
occur in the brain as a whole. There is no relevant biochemical
difference between one thought and another that could make it
impossible physically, just as there is no sequence of illuminated
pixels that is preferred by a TV screen, or electronics, or physics.

>
> Of course this violation maybe hard to detect in something very complicated like a brain;
> but Craig's theory doesn't seem to assume the brain is special in that respect and even a
> single electron supposedly has these extra, unobservable variables, i.e. a mind of its
> own.  

No. I have never said that a particle has a mind of its own, I only
say that it may have a sensorimotive quality which is primitive like
charge or spin, but that this quality scales up in a different way
than quantitative properties. The brain is very special *to us* and I
suspect that it is pretty special relatively speaking as far as
processes in the Cosmos. It's not special because it has awareness
though, it's just the degree to which that awareness is elaborated and
concentrated.

>The problem with electrons or other simple systems is that while we have complete
> access to their 3-p variables, we don't have access to their hypothetical other variables;
> the ones we call 1-p when referring to humans.  So when all the silver atoms in a
> Stern-Gerlach do just as we predict, it can be claimed that they all had the same 1-p
> variables and that's why the 3-p variables were sufficient to predict their behavior.

Why is that a problem?

> So the only way I see to test this theory, even in principle, would be to observe Craig's
> brain at a very low level while having him report his experiences (at least to himself)
> and show that his experiences and his brain states were not one-to-one.

No, I'm not saying that 1-p and 3-p are not synchronized, they are
synchronized, but that doesn't mean that voluntary choices supervene
on default neurological processes. Look at how our diaphragm works. We
can voluntarily control our breathing to a certain extent, but there
are involuntary default behaviors as well. This does not mean that we
can't decide to hold our breath or that it can only be our body which
is doing the deciding. How do you explain the appearance of voluntary
control of our body?

>Of course this is
> probably impossible with current technology.  Observing the brain at a coarse grained
> level leaves open the possibility that one is just missing the 3-p variables that you show
> the relationship to be one-to-one.
>
> So I'd say that until someone thinks of an empirical test for this "soul theory",
> discussing it is a waste of bandwidth.

Way to argue from authority. "Your thoughts are a waste of everyone's
time unless I think that they can be proved to my satisfaction".

Craig

meekerdb

Oct 4, 2011, 8:46:51 PM
to everyth...@googlegroups.com
On 10/4/2011 5:15 PM, Craig Weinberg wrote:
> On Oct 4, 2:59 pm, meekerdb<meeke...@verizon.net> wrote:
>
>> This goes by the name "causal completeness"; the idea that the 3-p observable state at t
>> is sufficient to predict the state at t+dt. Craig wants add to this that there is
>> additional information which is not 3-p observable and which makes a difference, so that
>> the state at t+dt depends not just on the 3-p observables at t, but also on some
>> additional "sensorimotive" variables. If you assume these variables are not independent
>> of the 3-p observables, then this is just panpsychic version of consciousness supervening
>> on the 3-p states. They are redundant in the informational sense. If you assume they
>> are independent of the 3-p variables and yet make a difference in the time evolution of
>> the state then it means the predictions based on the 3-p observables will fail, i.e. the
>> laws of physics and chemistry will be violated.
> Why would they have to be either completely dependent or independent?

Did I use the word "completely"?

> I've given several examples demonstrating how we routinely exercise
> voluntary control over parts of our minds, bodies, and environment
> while at the same time being involuntarily controlled by those same
> influences, often at the same time. This isn't a theory, this is the
> raw data set.

No it's not. In your examples of voluntary control you don't know what your brain is
doing. So you can't know whether your "voluntary" action was entirely caused by physical
precursors or whether there was some effect from libertarian free-will.

>
> If it were the case that the 3p and 1p were completely independent,
> then you would have ghosts jumping around into aluminum cans and
> walking around singing, and if they were completely dependent then
> there would be no point in being able to differentiate between
> voluntary and involuntary control of our mind, body, and environment.

Exactly the point of compatibilist free-will.

> Such an illusory distinction would not only be redundant but it would
> have no ontological basis to even be able to come into being or be
> conceivable. It would be like an elephant growing a TV set out of it's
> trunk to distract it from being an elephant.

Or pulling another meaningless example out of the nether regions.

>
> Since neither of those two cases is possible, I propose, as I have
> repeatedly proposed, that the 3p and 1p are in fact part of the same
> essential reality in which they overlap, but that they each extent in
> different topological directions;

What's a topological direction?

> specifically, 3p into matter, public
> space, electromagnetism, entropy, and relativity, and 1p into energy,
> private time, sensorimotive, significance, and perception.

"3p overlaps into entropy"!? Reads like gibberish to me.

>
> No laws of physics are broken by consciousness, but it is very
> confusing because our only example of consciousness is human
> consciousness, which is a multi-trillion cell awareness.

Exactly what I said. In fact one's only example of consciousness is their own. The
consciousness of other humans is an inference.

> The trick is
> to realize that you cannot directly correlate our experience of
> consciousness with the 3-p cellular phenomenology, but to only
> correlate it with the 3-p behavior of the brain as a whole.

That's the experimental question, and you don't know the answer.

> That's the
> starting point. If you are going to try to understand what a movie is
> about, you have to look at the whole images of the movie, and not
> focus on the pixels of the screen or the mechanics of pixel
> illumination to guide your interpretation. There is no human
> consciousness at that low level. There may be sensorimotive 1-p
> phenomenology there, and I think that there is, but we can't prove it
> now. What we can prove is there in 3-p would only relate to that low
> level 1-p which is unknown to us.
>
> My proposition is that our 1-p consciousness builds from lower level 1-
> p awareness and higher level 1-p semantic environmental influences,
> like cultural ideas, family traditions, etc.

But that is entirely untestable since we have no access to those 1-p consciousnesses.
Cultural ideas, family traditions are 3-p observables.

> It is not predictable
> from 3-p appearances alone, but not because it breaks the laws of
> physics. Physics has nothing to say about what particular patterns
> occur in the brain as a whole.

Sure it does - unless magic happens.

> There is no relevant biochemical
> difference between a one thought and another that could make it
> impossible physically,

So you say. But I think there is. If you think of an elephant there is something
biochemical happening that makes it not a thought about a giraffe. So when you read
"elephant" it is impossible to think of a giraffe at that moment.

> just as there is no sequence of illuminated
> pixels that is preferred by a TV screen, or electronics, or physics.
>
>> Of course this violation maybe hard to detect in something very complicated like a brain;
>> but Craig's theory doesn't seem to assume the brain is special in that respect and even a
>> single electron supposedly has these extra, unobservable variables, i.e. a mind of its
>> own.
> No. I have never said that a particle has a mind of it's own, I only
> say that it may have a sensorimotive quality which is primitive like
> charge or spin, but that this quality scales up in a different way
> than quantitative properties.

Scales up how? How is this sensorimotive quality detected or measured? What's its
operational definition? How is it different from the connective complexity of processes -
which is the quality that most people think gives a brain its special quality?

> The brain is very special *to us* and I
> suspect that it is pretty special relatively speaking as far as
> processes in the Cosmos. It's not special because it has awareness
> though, it's just the degree to which that awareness is elaborated and
> concentrated.
>
>> The problem with electrons or other simple systems is that while we have complete
>> access to their 3-p variables, we don't have access to their hypothetical other variables;
>> the ones we call 1-p when referring to humans. So when all the silver atoms in a
>> Stern-Gerlach do just as we predict, it can be claimed that they all had the same 1-p
>> variables and that's why the 3-p variables were sufficient to predict their behavior.
> Why is that a problem?

It's a problem because it makes your theory untestable for anything except a human brain.


>
>> So the only way I see to test this theory, even in principle, would be to observe Craig's
>> brain at a very low level while having him report his experiences (at least to himself)
>> and show that his experiences and his brain states were not one-to-one.
> No, I'm not saying that 1-p and 3-p are not synchronized, they are
> synchronized, but that doesn't mean that voluntary choices supervene
> on default neurological processes. Look at how our diaphragm works. We
> can voluntarily control our breathing to a certain extent, but there
> are involuntary default behaviors as well. This does not mean that we
> can't decide to hold our breath or that it can only be our body which
> is doing the deciding. How do you explain the appearance of voluntary
> control of our body?

It appears voluntary because we can't perceive the brain processes that produce the
action. So when the action comports with the brain's usual pathways we feel "we did it
voluntarily". Which is the point of David Eagleman's experiment with shifting a person's
time calibration. If he shifted it so that the result appeared earlier (in subjective
time) than the voluntary act then the person no longer felt that they had done it. It
happened without them.

>
>> Of course this is
>> probably impossible with current technology. Observing the brain at a coarse grained
>> level leaves open the possibility that one is just missing the 3-p variables that you show
>> the relationship to be one-to-one.
>>
>> So I'd say that until someone thinks of an empirical test for this "soul theory",
>> discussing it is a waste of bandwidth.
> Way to argue from authority. "Your thoughts are a waste of everyone's
> time unless I think that they can be proved to my satisfaction".

I didn't say anything about which outcome would satisfy me. I said it's a waste of time
to argue a theory that cannot be tested.

Brent

Stathis Papaioannou

Oct 4, 2011, 9:32:57 PM
to everyth...@googlegroups.com
On Wed, Oct 5, 2011 at 5:59 AM, meekerdb <meek...@verizon.net> wrote:

> This goes by the name "causal completeness"; the idea that the 3-p
> observable state at t is sufficient to predict the state at t+dt.  Craig
> wants add to this that there is additional information which is not 3-p
> observable and which makes a difference, so that the state at t+dt depends
> not just on the 3-p observables at t, but also on some additional
> "sensorimotive" variables.  If you assume these variables are not
> independent of the 3-p observables, then this is just panpsychic version of
> consciousness supervening on the 3-p states.  They are redundant in the
> informational sense.   If you assume they are independent of the 3-p
> variables and yet make a difference in the time evolution of the state then
> it means the predictions based on the 3-p observables will fail, i.e. the
> laws of physics and chemistry will be violated.
>
> Of course this violation maybe hard to detect in something very complicated
> like a brain; but Craig's theory doesn't seem to assume the brain is special
> in that respect and even a single electron supposedly has these extra,
> unobservable variables, i.e. a mind of its own.  The problem with electrons
> or other simple systems is that while we have complete access to their 3-p
> variables, we don't have access to their hypothetical other variables; the
> ones we call 1-p when referring to humans.  So when all the silver atoms in
> a Stern-Gerlach do just as we predict, it can be claimed that they all had
> the same 1-p variables and that's why the 3-p variables were sufficient to
> predict their behavior.

That's a bit like saying there are fairies at the bottom of the garden
but they hide whenever we look for them. According to Craig, the 1-p
influence (which is equivalent to an immaterial soul) is ubiquitous in
living things, and possibly in other things as well. I think that if no
scientist has ever seen evidence of this ubiquitous influence, that is
good reason to say that it doesn't exist. In fact, Craig himself
denies that his theory would manifest as a violation of physical law,
and his theory is therefore inconsistent.

> So the only way I see to test this theory, even in principle, would be to
> observe Craig's brain at a very low level while having him report his
> experiences (at least to himself) and show that his experiences and his
> brain states were not one-to-one.  Of course this is probably impossible
> with current technology.  Observing the brain at a coarse grained level
> leaves open the possibility that one is just missing the 3-p variables that
> you show the relationship to be one-to-one.
>
> So I'd say that until someone thinks of an empirical test for this "soul
> theory", discussing it is a waste of bandwidth.


--
Stathis Papaioannou

Craig Weinberg

Oct 4, 2011, 11:14:39 PM
to Everything List
On Oct 4, 8:46 pm, meekerdb <meeke...@verizon.net> wrote:
> On 10/4/2011 5:15 PM, Craig Weinberg wrote:
>
> > On Oct 4, 2:59 pm, meekerdb<meeke...@verizon.net> wrote:
>
> >> This goes by the name "causal completeness"; the idea that the 3-p observable state at t
> >> is sufficient to predict the state at t+dt. Craig wants add to this that there is
> >> additional information which is not 3-p observable and which makes a difference, so that
> >> the state at t+dt depends not just on the 3-p observables at t, but also on some
> >> additional "sensorimotive" variables. If you assume these variables are not independent
> >> of the 3-p observables, then this is just panpsychic version of consciousness supervening
> >> on the 3-p states. They are redundant in the informational sense. If you assume they
> >> are independent of the 3-p variables and yet make a difference in the time evolution of
> >> the state then it means the predictions based on the 3-p observables will fail, i.e. the
> >> laws of physics and chemistry will be violated.
> > Why would they have to be either completely dependent or independent?
>
> Did I use the word "completely"?

You're reducing the possibilities to two mutually exclusive impossible
options, so if 'completely' is not implied then you aren't really
saying anything.

>
> > I've given several examples demonstrating how we routinely exercise
> > voluntary control over parts of our minds, bodies, and environment
> > while at the same time being involuntarily controlled by those same
> > influences, often at the same time. This isn't a theory, this is the
> > raw data set.
>
> No it's not. In your examples of voluntary control you don't know what your brain is
> doing. So you can't know whether you "voluntary" action was entirely caused by physical
> precursors or whether their was some effect from libertarian free-will.

What difference does it make what your brain is doing to be able to
say that you are voluntarily controlling the words that you type here?

>
>
>
> > If it were the case that the 3p and 1p were completely independent,
> > then you would have ghosts jumping around into aluminum cans and
> > walking around singing, and if they were completely dependent then
> > there would be no point in being able to differentiate between
> > voluntary and involuntary control of our mind, body, and environment.
>
> Exactly the point of compatibilist free-will.

What does that label add to this conversation?

>
> > Such an illusory distinction would not only be redundant but it would
> > have no ontological basis to even be able to come into being or be
> > conceivable. It would be like an elephant growing a TV set out of it's
> > trunk to distract it from being an elephant.
>
> Or pulling another meaningless example out of the nether regions.

Why meaningless? I'm pointing out that the illusion of free will in a
deterministic universe would be not merely puzzling but fantastically
absurd. Your criticism is arbitrary.

>
>
>
> > Since neither of those two cases is possible, I propose, as I have
> > repeatedly proposed, that the 3p and 1p are in fact part of the same
> > essential reality in which they overlap, but that they each extent in
> > different topological directions;
>
> What's a topological direction?

Matter elaborates discretely across space; energy elaborates
cumulatively through time.

>
> > specifically, 3p into matter, public
> > space, electromagnetism, entropy, and relativity, and 1p into energy,
> > private time, sensorimotive, significance, and perception.
>
> "3p overlaps into entropy"!? Reads like gibberish to me.

3-p doesn't overlap entropy, 3-p is entropic. 1-p is syntropic. The
overlap is the 'here and now'. I'm not sure that it matters what I say
though, you're mainly just auditing my responses for technicalities so
that you can get a feeling of 'winning' a debate. It's a sensorimotive
circuit. A feeling that you are seeking which requires a particular
kind of experience to satisfy it. If I could offer you a drug instead
that would stimulate the precise neural pathways involved in feeling
that you had proved me wrong in an objective way, would that be
satisfying to you? Would there be no difference in being right versus
having your physical precursors to feeling right get tweaked? Isn't
that what you are saying, that in fact this discussion is nothing but
brain drugs with no free will determining our opinions? Isn't being
right or wrong just a matter of biochemistry?

>
>
>
> > No laws of physics are broken by consciousness, but it is very
> > confusing because our only example of consciousness is human
> > consciousness, which is a multi-trillion cell awareness.
>
> Exactly what I said. In fact one's only example of consciousness is their own. The
> consciousness of other humans is an inference.

I agree. Although I would qualify the inference. It's more of an
educated inference. I'm making a different point with it though. I'm
saying there is a problem with our default assumptions about micro
brain mechanisms correlating with macro psychological experiences.

>
> > The trick is
> > to realize that you cannot directly correlate our experience of
> > consciousness with the 3-p cellular phenomenology, but to only
> > correlate it with the 3-p behavior of the brain as a whole.
>
> That's the experimental question, and you don't know the answer.

I don't claim to have the answer, but I have a hypothesis, which has
to be understood using this way of looking at the mind and brain.

>
> > That's the
> > starting point. If you are going to try to understand what a movie is
> > about, you have to look at the whole images of the movie, and not
> > focus on the pixels of the screen or the mechanics of pixel
> > illumination to guide your interpretation. There is no human
> > consciousness at that low level. There may be sensorimotive 1-p
> > phenomenology there, and I think that there is, but we can't prove it
> > now. What we can prove is there in 3-p would only relate to that low
> > level 1-p which is unknown to us.
>
> > My proposition is that our 1-p consciousness builds from lower level 1-
> > p awareness and higher level 1-p semantic environmental influences,
> > like cultural ideas, family traditions, etc.
>
> But that is entirely untestable since we have no access to those 1-p consciousnesses.
> Cultural ideas, family traditions are 3-p observables.

We have access to our own 1-p consciousness. What else do we need?
Cultural ideas and family traditions are not 3-p observable - they
have no melting point or specific gravity, they occupy no location -
they must be inferred by 1-p interpretation/participation/consensus.

>
> > It is not predictable
> > from 3-p appearances alone, but not because it breaks the laws of
> > physics. Physics has nothing to say about what particular patterns
> > occur in the brain as a whole.
>
> Sure it does - unless magic happens.

Consciousness happens. Physics has nothing to say about what the
content of any particular brain's thoughts should be. If I give you a
book about Marxism then you will have thoughts about Marxism - not
about whatever physical modeling of a brain of your genetic makeup
would suggest.

>
> > There is no relevant biochemical
> > difference between a one thought and another that could make it
> > impossible physically,
>
> So you say. But I think there is. If you think of an elephant there is something
> biochemical happening that makes it not a thought about a giraffe. So when you read
> "elephant" it is impossible to think of a giraffe at that moment.

Nah, you can easily be hypnotized to think of a giraffe whenever you
see the word elephant. I don't understand what it would prove anyways.
Each person reading the word for elephant in their own language will
have different biochemical happenings which could not be proactively
tied to elephantness or giraffeness if you didn't already have a
correlation established beforehand from first hand anecdotal reports
of subjective content. There is no predictive route from the
biochemistry to zoological linguistic complexes and no role for any
such complexes to play in the observed biochemistry.

>
> > just as there is no sequence of illuminated
> > pixels that is preferred by a TV screen, or electronics, or physics.
>
> >> Of course this violation maybe hard to detect in something very complicated like a brain;
> >> but Craig's theory doesn't seem to assume the brain is special in that respect and even a
> >> single electron supposedly has these extra, unobservable variables, i.e. a mind of its
> >> own.
> > No. I have never said that a particle has a mind of it's own, I only
> > say that it may have a sensorimotive quality which is primitive like
> > charge or spin, but that this quality scales up in a different way
> > than quantitative properties.
>
> Scales up how?

Qualitatively. Richer, deeper, more meaningful qualia. Where else does
it come from? A metaphysical dimension?

> How is this sensorimotive quality detected or measured?

It is felt. It is experienced first hand as qualia.

>What's its
> operational definition?

What form do you want it in? Defined in terms of what? Sensorimotive
phenomena are a universal primitive: the capacity for participatory
being - to detect and respond to changing interior and exterior
conditions.

>How is it different from connective complexity of processes -
> which is the quality that most people think gives a brain its special quality.

Without sensorimotive qualities, those processes cannot be experienced
by anything. What knows the difference between simplicity and
complexity if you have no awareness to distinguish it?

>
> > The brain is very special *to us* and I
> > suspect that it is pretty special relatively speaking as far as
> > processes in the Cosmos. It's not special because it has awareness
> > though, it's just the degree to which that awareness is elaborated and
> > concentrated.
>
> >> The problem with electrons or other simple systems is that while we have complete
> >> access to their 3-p variables, we don't have access to their hypothetical other variables;
> >> the ones we call 1-p when referring to humans. So when all the silver atoms in a
> >> Stern-Gerlach do just as we predict, it can be claimed that they all had the same 1-p
> >> variables and that's why the 3-p variables were sufficient to predict their behavior.
> > Why is that a problem?
>
> It's a problem because it makes your theory untestable for anything except a human brain.

Why would you need more than a human brain? You just have to turn it
into a laboratory. Figure out how conjoined twins who share the same
brain do that, and then conjoin your brain with other kinds of brains,
tissues, cells, molecules. It's a lot easier than trying to copy
someone's brain by duplicating the position of every atom in their
neurons.

>
>
>
> >> So the only way I see to test this theory, even in principle, would be to observe Craig's
> >> brain at a very low level while having him report his experiences (at least to himself)
> >> and show that his experiences and his brain states were not one-to-one.
> > No, I'm not saying that 1-p and 3-p are not synchronized, they are
> > synchronized, but that doesn't mean that voluntary choices supervene
> > on default neurological processes. Look at how our diaphragm works. We
> > can voluntarily control our breathing to a certain extent, but there
> > are involuntary default behaviors as well. This does not mean that we
> > can't decide to hold our breath or that it can only be our body which
> > is doing the deciding. How do you explain the appearance of voluntary
> > control of our body?
>
> It appears voluntary because we can't perceive the brain processes that produce the
> action. So when the action comports with the brain's usual pathways we feel "we did it
> voluntarily".

That doesn't explain the appearance at all. You're just acknowledging
that there is a feeling despite your not knowing (or caring) why it's
necessary.

> Which is the point of David Eagleman's experiment with shifting a person's
> time calibration. If he shifted it so that the result appeared earlier (in subjective
> time) than the voluntary act, then the person no longer felt that they had done it. It
> happened without them.

There is no question that our feeling of free will as a unified
phenomenon is limited to a particular scale of time, but so what? We
know that our consciousness is multi-threaded so that many awarenesses
compete for attention. That takes time. The threads that are involved
with tying the perceptions together are going to lag behind the flow
of sensations because you are slicing the time frame too thin to
reveal the minimum thickness of human consciousness. That doesn't mean
that our voluntary actions are not voluntary. It just means that our
psyche is very complex and arriving at a consensus can only happen so
fast. Measurements faster than that are going to look strange, just as
freezing a movie mid frame is going to give you some strange artifacts
and blurs that defy ordinary expectations of what a movie should look
like.
>
>
>
> >> Of course this is
> >> probably impossible with current technology. Observing the brain at a coarse grained
> >> level leaves open the possibility that one is just missing the 3-p variables that you show
> >> the relationship to be one-to-one.
>
> >> So I'd say that until someone thinks of an empirical test for this "soul theory",
> >> discussing it is a waste of bandwidth.
> > Way to argue from authority. "Your thoughts are a waste of everyone's
> > time unless I think that they can be proved to my satisfaction".
>
> I didn't say anything about which outcome would satisfy me. I said it's a waste of time
> to argue a theory that cannot be tested.

It can be tested, just maybe not with the technology we are using. You
could build instruments which use living tissue to test these ideas.
Replace someone's eye with a petri-dish retina that can serve as a
laboratory for different types of cells to see if vision can be
recreated out of other kinds of tissue, see if you get new colors,
etc. There's all kinds of ways this theory could be tested, I'm sure
some of which would require less innovation. You just have to get a
few people who are curious about the ideas rather than curious about
ways to defend the status quo against it.

Craig

Craig Weinberg

unread,
Oct 4, 2011, 11:24:42 PM10/4/11
to Everything List
On Oct 4, 9:32 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
Wrong. I have been very consistent in my position that it is a
category error to conceive of the 1-p influence as a pseudo-substance.
It is not a 'stuff' that's in everything, any more than volts are a
stuff that's in everything. It's the opposite of a stuff - it is what
it's like to be stuff and to be surrounded by stuff.

> is ubiquitous in
> living things, and possibly in other things as well. I think if no
> scientist has ever seen evidence of this ubiquitous influence that is
> good reason to say that it doesn't exist.

No scientist has ever seen anything other than evidence of
sensorimotive perception. That is all that we or anything can ever
see. I agree that it doesn't exist in the sense of it occupying space
like matter does; it insists, and it occupies matter through time.

> In fact, Craig himself
> denies that his theory would manifest as violation of physical law,
> and is therefore inconsistent.

There is no inconsistency. You're just not understanding what I'm
saying because you are only willing to think in terms of reactive
strategies for neutralizing the threat to your common sense (which is
a cumulative entanglement of autobiographical experiences and
understandings, interpretations of cultural traditions and
perspectives, etc).

Craig

meekerdb

unread,
Oct 5, 2011, 12:23:15 AM10/5/11
to everyth...@googlegroups.com
On 10/4/2011 8:14 PM, Craig Weinberg wrote:
> On Oct 4, 8:46 pm, meekerdb<meeke...@verizon.net> wrote:
>> On 10/4/2011 5:15 PM, Craig Weinberg wrote:
>>
>>> On Oct 4, 2:59 pm, meekerdb<meeke...@verizon.net> wrote:
>>>> This goes by the name "causal completeness"; the idea that the 3-p observable state at t
> >>>> is sufficient to predict the state at t+dt. Craig wants to add to this that there is
>>>> additional information which is not 3-p observable and which makes a difference, so that
>>>> the state at t+dt depends not just on the 3-p observables at t, but also on some
>>>> additional "sensorimotive" variables. If you assume these variables are not independent
> >>>> of the 3-p observables, then this is just a panpsychic version of consciousness supervening
>>>> on the 3-p states. They are redundant in the informational sense. If you assume they
>>>> are independent of the 3-p variables and yet make a difference in the time evolution of
>>>> the state then it means the predictions based on the 3-p observables will fail, i.e. the
>>>> laws of physics and chemistry will be violated.
>>> Why would they have to be either completely dependent or independent?
>> Did I use the word "completely"?
> You're reducing the possibilities to two mutually exclusive impossible
> options, so if 'completely' is not implied then you aren't really
> saying anything.

I wrote "not independent" and "independent". Those are mutually exclusive in any logic I
know of. But "not independent" is not the same as "completely dependent". Try reading
what is written.

>
>>> I've given several examples demonstrating how we routinely exercise
>>> voluntary control over parts of our minds, bodies, and environment
>>> while at the same time being involuntarily controlled by those same
>>> influences, often at the same time. This isn't a theory, this is the
>>> raw data set.
>> No it's not. In your examples of voluntary control you don't know what your brain is
> >> doing. So you can't know whether your "voluntary" action was entirely caused by physical
> >> precursors or whether there was some effect from libertarian free-will.
> What difference does it make what your brain is doing to be able to
> say that you are voluntarily controlling the words that you type here?
>
>>
>>
>>> If it were the case that the 3p and 1p were completely independent,
>>> then you would have ghosts jumping around into aluminum cans and
>>> walking around singing, and if they were completely dependent then
>>> there would be no point in being able to differentiate between
>>> voluntary and involuntary control of our mind, body, and environment.
>> Exactly the point of compatibilist free-will.
> What does that label add to this conversation?

It makes the discussion precise, instead of wandering around in analogies and metaphors.

>
>>> Such an illusory distinction would not only be redundant but it would
>>> have no ontological basis to even be able to come into being or be
>>> conceivable. It would be like an elephant growing a TV set out of its
>>> trunk to distract it from being an elephant.
>> Or pulling another meaningless example out of the nether regions.
> Why meaningless? I'm pointing out that the illusion of free will in a
> deterministic universe would be not merely puzzling but fantastically
> absurd. Your criticism is arbitrary.

You're "pointing out" the very thing that is in dispute. Your assertion that it is absurd is
not a substitute for saying how it could be tested and found false.

>
>>
>>
>>> Since neither of those two cases is possible, I propose, as I have
>>> repeatedly proposed, that the 3p and 1p are in fact part of the same
>>> essential reality in which they overlap, but that they each extend in
>>> different topological directions;
>> What's a topological direction?
> matter elaborates discretely across space, energy elaborates
> cumulatively through time.

A creative use of "elaborates"... does not parse.

>
>>> specifically, 3p into matter, public
>>> space, electromagnetism, entropy, and relativity, and 1p into energy,
>>> private time, sensorimotive, significance, and perception.
>> "3p overlaps into entropy"!? Reads like gibberish to me.
> 3-p doesn't overlap entropy, 3-p is entropic. 1-p is syntropic. The
> overlap is the 'here and now'. I'm not sure that it matters what I say
> though, you're mainly just auditing my responses for technicalities so
> that you can get a feeling of 'winning' a debate. It's a sensorimotive
> circuit. A feeling that you are seeking which requires a particular
> kind of experience to satisfy it. If I could offer you a drug instead
> that would stimulate the precise neural pathways involved in feeling
> that you had proved me wrong in an objective way, would that be
> satisfying to you? Would there be no difference in being right versus
> having your physical precursors to feeling right get tweaked? Isn't
> that what you are saying, that in fact this discussion is nothing but
> brain drugs with no free will determining our opinions? Isn't being
> right or wrong just a matter of biochemistry?

No, it's a matter of passing an empirical test.

>
>>
>>
>>> No laws of physics are broken by consciousness, but it is very
>>> confusing because our only example of consciousness is human
>>> consciousness, which is a multi-trillion cell awareness.
>> Exactly what I said. In fact one's only example of consciousness is their own. The
>> consciousness of other humans is an inference.
> I agree. Although I would qualify the inference. It's more of an
> educated inference. I'm making a different point with it though. I'm
> saying there is a problem with our default assumptions about micro
> brain mechanisms correlating with macro psychological experiences.

Fine. Think of a test that would prove the competing theory wrong.

>
>>> The trick is
>>> to realize that you cannot directly correlate our experience of
>>> consciousness with the 3-p cellular phenomenology, but to only
>>> correlate it with the 3-p behavior of the brain as a whole.
>> That's the experimental question, and you don't know the answer.
> I don't claim to have the answer, but I have a hypothesis, which has
> to be understood using this way of looking at the mind and brain.
>
>>> That's the
>>> starting point. If you are going to try to understand what a movie is
>>> about, you have to look at the whole images of the movie, and not
>>> focus on the pixels of the screen or the mechanics of pixel
>>> illumination to guide your interpretation. There is no human
>>> consciousness at that low level. There may be sensorimotive 1-p
>>> phenomenology there, and I think that there is, but we can't prove it
>>> now. What we can prove is there in 3-p would only relate to that low
>>> level 1-p which is unknown to us.
>>> My proposition is that our 1-p consciousness builds from lower level 1-
>>> p awareness and higher level 1-p semantic environmental influences,
>>> like cultural ideas, family traditions, etc.
>> But that is entirely untestable since we have no access to those 1-p consciousnesses.
>> Cultural ideas, family traditions are 3-p observables.
> We have access to our own 1-p consciousness. What else do we need?

We need to show that it is not entirely determined by the physical evolution of the brain.

> Cultural ideas and family traditions are not 3-p observable - they
> have no melting point or specific gravity, they occupy no location -
> they must be inferred by 1-p interpretation/participation/consensus.

Everything is inferred from 1-p experiences. But cultural ideas and traditions are
public; they can be observed by more than one person and they can reach intersubjective
agreement just like any other facts about the world.

>
>>> It is not predictable
>>> from 3-p appearances alone, but not because it breaks the laws of
>>> physics. Physics has nothing to say about what particular patterns
>>> occur in the brain as a whole.
>> Sure it does - unless magic happens.
> Consciousness happens. Physics has nothing to say about what the
> content of any particular brain's thoughts should be. If give you a
> book about Marxism then you will have thoughts about Marxism - not
> about whatever physical modeling of a brain of your genetic makeup
> would suggest.

Do you think a book about Marxism is not physical and reading it is not a physical
process? What is your evidence for this? That's the whole question: Is thinking a purely
physical process, or does it include some extra-physical part?

An operational definition is in terms of operations that will detect or measure something.

> Sensorimotive
> phenomena is a universal primitive. It is the capacity for
> participatory being - to detect and respond to changing interior and
> exterior conditions.
>
>> How is it different from connective complexity of processes -
>> which is the quality that most people think gives a brain its special quality.
> Without sensorimotive qualities, those processes cannot be experienced
> by anything. What knows the difference between simplicity and
> complexity if you have no awareness to distinguish it?

If you have no awareness then you don't know anything. It doesn't follow that everything
depends on your awareness of it.

What's the operational definition of "voluntary"? Does it exclude "determined by physics"?

> It just means that our
> psyche is very complex and arriving at a consensus can only happen so
> fast. Measurements faster than that are going to look strange, just as
> freezing a movie mid frame is going to give you some strange artifacts
> and blurs that defy ordinary expectations of what a movie should look
> like.
>>
>>
>>>> Of course this is
>>>> probably impossible with current technology. Observing the brain at a coarse grained
>>>> level leaves open the possibility that one is just missing the 3-p variables that you show
>>>> the relationship to be one-to-one.
>>>> So I'd say that until someone thinks of an empirical test for this "soul theory",
>>>> discussing it is a waste of bandwidth.
>>> Way to argue from authority. "Your thoughts are a waste of everyone's
>>> time unless I think that they can be proved to my satisfaction".
>> I didn't say anything about which outcome would satisfy me. I said it's a waste of time
>> to argue a theory that cannot be tested.
> It can be tested, just maybe not with the technology we are using. You
> could build instruments which use living tissue to test these ideas.
> Replace someone's eye with a petri-dish retina that can serve as a
> laboratory for different types of cells to see if vision can be
> recreated out of other kinds of tissue, see if you get new colors,
> etc. There's all kinds of ways this theory could be tested,

How would you know if it perceived new colors? You couldn't ask it, and you have no
access to its qualia (if it has any).

Brent

meekerdb

unread,
Oct 5, 2011, 12:27:30 AM10/5/11
to everyth...@googlegroups.com

Right.

> According to Craig, the 1-p
> influence (which is equivalent to an immaterial soul) is ubiquitous in
> living things, and possibly in other things as well.

But he doesn't say what effect it has. It could be anything and hence could explain any
experimental result.

Brent

Quentin Anciaux

unread,
Oct 5, 2011, 2:54:27 AM10/5/11
to everyth...@googlegroups.com
Hi,

2011/10/5 Craig Weinberg <whats...@gmail.com>

But reading a book is a physical process: photons from the book hit your retina, which in turn generates electrical currents through the nerves to your brain, which acts according to its state and the new input.

So if I have a model of a brain in the same state and give it the same input, it'll think about Marxism and not whatever whatever whatever...

I don't know how your idea of having the model of a thing could help you predict inputs outside of it...
 
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Craig Weinberg

unread,
Oct 5, 2011, 9:33:03 AM10/5/11
to Everything List
On Oct 5, 12:23 am, meekerdb <meeke...@verizon.net> wrote:
> On 10/4/2011 8:14 PM, Craig Weinberg wrote:
>
> > On Oct 4, 8:46 pm, meekerdb<meeke...@verizon.net> wrote:
> >> On 10/4/2011 5:15 PM, Craig Weinberg wrote:
>
> >>> On Oct 4, 2:59 pm, meekerdb<meeke...@verizon.net> wrote:
> >>>> This goes by the name "causal completeness"; the idea that the 3-p observable state at t
> >>>> is sufficient to predict the state at t+dt. Craig wants to add to this that there is
> >>>> additional information which is not 3-p observable and which makes a difference, so that
> >>>> the state at t+dt depends not just on the 3-p observables at t, but also on some
> >>>> additional "sensorimotive" variables. If you assume these variables are not independent
> >>>> of the 3-p observables, then this is just a panpsychic version of consciousness supervening
> >>>> on the 3-p states. They are redundant in the informational sense. If you assume they
> >>>> are independent of the 3-p variables and yet make a difference in the time evolution of
> >>>> the state then it means the predictions based on the 3-p observables will fail, i.e. the
> >>>> laws of physics and chemistry will be violated.
> >>> Why would they have to be either completely dependent or independent?
> >> Did I use the word "completely"?
> > You're reducing the possibilities to two mutually exclusive impossible
> > options, so if 'completely' is not implied then you aren't really
> > saying anything.
>
> I wrote "not independent" and "independent". Those are mutually exclusive in any logic I
> know of. But "not independent" is not the same as "completely dependent". Try reading
> what is written.

I did read what you wrote. You said we only have two options, either
1p and 3p are independent or not independent. I'm countering that by
saying that they are neither completely independent nor dependent, so
there is no reason to go forward with the assumption that you have to
pick one of your two impossible conclusions.

>
>
>
> >>> I've given several examples demonstrating how we routinely exercise
> >>> voluntary control over parts of our minds, bodies, and environment
> >>> while at the same time being involuntarily controlled by those same
> >>> influences, often at the same time. This isn't a theory, this is the
> >>> raw data set.
> >> No it's not. In your examples of voluntary control you don't know what your brain is
> >> doing. So you can't know whether your "voluntary" action was entirely caused by physical
> >> precursors or whether there was some effect from libertarian free-will.
> > What difference does it make what your brain is doing to be able to
> > say that you are voluntarily controlling the words that you type here?
>
> >>> If it were the case that the 3p and 1p were completely independent,
> >>> then you would have ghosts jumping around into aluminum cans and
> >>> walking around singing, and if they were completely dependent then
> >>> there would be no point in being able to differentiate between
> >>> voluntary and involuntary control of our mind, body, and environment.
> >> Exactly the point of compatibilist free-will.
> > What does that label add to this conversation?
>
> It makes the discussion precise, instead of wandering around in analogies and metaphors.

I think that metaphors reveal the truth by letting the thinker make
sense of it for themselves, while labels are intended to intimidate and
prejudice the thinker to conceal the truth.

>
>
>
> >>> Such an illusory distinction would not only be redundant but it would
> >>> have no ontological basis to even be able to come into being or be
> >>> conceivable. It would be like an elephant growing a TV set out of its
> >>> trunk to distract it from being an elephant.
> >> Or pulling another meaningless example out of the nether regions.
> > Why meaningless? I'm pointing out that the illusion of free will in a
> > deterministic universe would be not merely puzzling but fantastically
> > absurd. Your criticism is arbitrary.
>
> You're "pointing out" the very thing that is in dispute. Your assertion that it is absurd is
> not a substitute for saying how it could be tested and found false.

I'm stating that, logically, it would be absurd to think that awareness
would or could exist in a deterministic universe. Since we know for a
fact that awareness exists but we don't know that the universe is
deterministic, why do you find my position to be the unfalsifiable
one?

>
>
>
> >>> Since neither of those two cases is possible, I propose, as I have
> >>> repeatedly proposed, that the 3p and 1p are in fact part of the same
> >>> essential reality in which they overlap, but that they each extend in
> >>> different topological directions;
> >> What's a topological direction?
> > matter elaborates discretely across space, energy elaborates
> > cumulatively through time.
>
> A creative use of "elaborates"... does not parse.

ok, matter and energy 'appear to us as being involved in a consistent
range and variety of persistent forms and repeating and novel
processes'

>
>
>
> >>> specifically, 3p into matter, public
> >>> space, electromagnetism, entropy, and relativity, and 1p into energy,
> >>> private time, sensorimotive, significance, and perception.
> >> "3p overlaps into entropy"!? Reads like gibberish to me.
> > 3-p doesn't overlap entropy, 3-p is entropic. 1-p is syntropic. The
> > overlap is the 'here and now'. I'm not sure that it matters what I say
> > though, you're mainly just auditing my responses for technicalities so
> > that you can get a feeling of 'winning' a debate. It's a sensorimotive
> > circuit. A feeling that you are seeking which requires a particular
> > kind of experience to satisfy it. If I could offer you a drug instead
> > that would stimulate the precise neural pathways involved in feeling
> > that you had proved me wrong in an objective way, would that be
> > satisfying to you? Would there be no difference in being right versus
> > having your physical precursors to feeling right get tweaked? Isn't
> > that what you are saying, that in fact this discussion is nothing but
> > brain drugs with no free will determining our opinions? Isn't being
> > right or wrong just a matter of biochemistry?
>
> No, it's a matter of passing an empirical test.

How is an empirical test not a matter of biochemistry? Can I not
induce the feeling that something has passed an empirical test in any
person or group of people with the right neurological agents?

>
>
>
> >>> No laws of physics are broken by consciousness, but it is very
> >>> confusing because our only example of consciousness is human
> >>> consciousness, which is a multi-trillion cell awareness.
> >> Exactly what I said. In fact one's only example of consciousness is their own. The
> >> consciousness of other humans is an inference.
> > I agree. Although I would qualify the inference. It's more of an
> > educated inference. I'm making a different point with it though. I'm
> > saying there is a problem with our default assumptions about micro
> > brain mechanisms correlating with macro psychological experiences.
>
> Fine. Think of a test that would prove the competing theory wrong.

What's the competing theory? "Someday we will find a connection?"

>
>
>
> >>> The trick is
> >>> to realize that you cannot directly correlate our experience of
> >>> consciousness with the 3-p cellular phenomenology, but to only
> >>> correlate it with the 3-p behavior of the brain as a whole.
> >> That's the experimental question, and you don't know the answer.
> > I don't claim to have the answer, but I have a hypothesis, which has
> > to be understood using this way of looking at the mind and brain.
>
> >>> That's the
> >>> starting point. If you are going to try to understand what a movie is
> >>> about, you have to look at the whole images of the movie, and not
> >>> focus on the pixels of the screen or the mechanics of pixel
> >>> illumination to guide your interpretation. There is no human
> >>> consciousness at that low level. There may be sensorimotive 1-p
> >>> phenomenology there, and I think that there is, but we can't prove it
> >>> now. What we can prove is there in 3-p would only relate to that low
> >>> level 1-p which is unknown to us.
> >>> My proposition is that our 1-p consciousness builds from lower level 1-
> >>> p awareness and higher level 1-p semantic environmental influences,
> >>> like cultural ideas, family traditions, etc.
> >> But that is entirely untestable since we have no access to those 1-p consciousnesses.
> >> Cultural ideas, family traditions are 3-p observables.
> > We have access to our own 1-p consciousness. What else do we need?
>
> We need to show that it is not entirely determined by the physical evolution of the brain.

Wouldn't we first need a plausible mechanism for physical evolution in
the brain to lead to 1-p awareness? It's not like growing sharper
teeth, there's nothing that can just be quantitatively augmented or
diminished to suddenly make consciousness happen in something that has
no possibility of being conscious. The possibility of consciousness in
the first place is the mystery that materialism and determinism have
to address, not that the fact of consciousness needs to account for
itself in physical terms.

>
> > Cultural ideas and family traditions are not 3-p observable - they
> > have no melting point or specific gravity, they occupy no location -
> > they must be inferred by 1-p interpretation/participation/consensus.
>
> Everything is inferred from 1-p experiences. But cultural ideas and traditions are
> public; they can be observed by more than one person and they can reach intersubjective
> agreement just like any other facts about the world.
>

They are public to the members of the particular culture only. That's
not 3-p, it's 1-p plural.

>
>
> >>> It is not predictable
> >>> from 3-p appearances alone, but not because it breaks the laws of
> >>> physics. Physics has nothing to say about what particular patterns
> >>> occur in the brain as a whole.
> >> Sure it does - unless magic happens.
> > Consciousness happens. Physics has nothing to say about what the
> > content of any particular brain's thoughts should be. If I give you a
> > book about Marxism then you will have thoughts about Marxism - not
> > about whatever physical modeling of a brain of your genetic makeup
> > would suggest.
>
> Do you think a book about Marxism is not physical and reading it is not a physical
> process? What is your evidence for this?

Because if the book is written in Russian then you won't (I'm
assuming) be able to read it. If you learn to read Russian then it
becomes a book about Marxism to you, but with no changes to the ink or
pages in the book. A book is a physical thing, but 'about Marxism' is
a 1-p subjective experience.

> That's the whole question: Is thinking a purely
> physical process, or does it include some extra-physical part?

That's your question, not mine. I see the physical and experiential in
a clear and specific relationship of mutual interdependence. Thinking is
not extra-physical, it is entero-physical.
Sensorimotive phenomena is the ability to privately perceive,
experience, and intentionally participate in that experience.

>
> > Sensorimotive
> > phenomena is a universal primitive. It is the capacity for
> > participatory being - to detect and respond to changing interior and
> > exterior conditions.
>
> >> How is it different from connective complexity of processes -
> >> which is the quality that most people think gives a brain its special quality.
> > Without sensorimotive qualities, those processes cannot be experienced
> > by anything. What knows the difference between simplicity and
> > complexity if you have no awareness to distinguish it?
>
> If you have no awareness then you don't know anything. It doesn't follow that everything
> depends on your awareness of it.

No, but it doesn't follow that anything can exist independently of
awareness in general either. If I have no awareness then I don't know
anything, but if the universe has no awareness then it doesn't exist
(in what form could it be said to exist?)
Voluntary means that we perceive a coherent and consistent qualitative
difference from actions which are involuntary. We feel that we are
doing something consciously, as opposed to digesting something
automatically and unconsciously.

>
> > It just means that our
> > psyche is very complex and arriving at a consensus can only happen so
> > fast. Measurements faster than that are going to look strange, just as
> > freezing a movie mid frame is going to give you some strange artifacts
> > and blurs that defy ordinary expectations of what a movie should look
> > like.
>
> >>>> Of course this is
> >>>> probably impossible with current technology. Observing the brain at a coarse grained
> >>>> level leaves open the possibility that one is just missing the 3-p variables that you show
> >>>> the relationship to be one-to-one.
> >>>> So I'd say that until someone thinks of an empirical test for this "soul theory",
> >>>> discussing it is a waste of bandwidth.
> >>> Way to argue from authority. "Your thoughts are a waste of everyone's
> >>> time unless I think that they can be proved to my satisfaction".
> >> I didn't say anything about which outcome would satisfy me. I said it's a waste of time
> >> to argue a theory that cannot be tested.
> > It can be tested, just maybe not with the technology we are using. You
> > could build instruments which use living tissue to test these ideas.
> > Replace someone's eye with a petri-dish retina that can serve as a
> > laboratory for different types of cells to see if vision can be
> > recreated out of other kinds of tissue, see if you get new colors,
> > etc. There's all kinds of ways this theory could be tested,
>
> How would you know if it perceived new colors? You couldn't ask it, and you have no
> access to its qualia (if it has any).

That's why I said "REPLACE SOMEONE'S EYE with a petri-dish retina".
Don't you read the words I write? ;)
Then the patient has access to the qualia of whatever we can
successfully connect to their optic nerve.

Craig

Craig Weinberg

unread,
Oct 5, 2011, 9:37:31 AM10/5/11
to Everything List
On Oct 5, 12:27 am, meekerdb <meeke...@verizon.net> wrote:

>
> > According to Craig, the 1-p
> > influence (which is equivalent to an immaterial soul) is ubiquitous in
> > living things, and possibly in other things as well.
>
> But he doesn't say what effect it has. It could be anything and hence could explain any
> experimental result.

The effect it has is the same effect that electromagnetism or 'energy'
has. It just comes from within the thing instead of outside of it.

Craig

Craig Weinberg

unread,
Oct 5, 2011, 9:56:41 AM10/5/11
to Everything List
On Oct 5, 2:54 am, Quentin Anciaux <allco...@gmail.com> wrote:
> Hi,
>
> 2011/10/5 Craig Weinberg <whatsons...@gmail.com>

> > Consciousness happens. Physics has nothing to say about what the
> > content of any particular brain's thoughts should be. If I give you a
> > book about Marxism then you will have thoughts about Marxism - not
> > about whatever physical modeling of a brain of your genetic makeup
> > would suggest.
>
> But reading a book is a physical process: photons from the book hit your
> retina, which in turn generates electrical currents through the nerves to
> your brain, which acts according to its state and the new input.

The same process would be taking place whether you could read or not.
Your ability to make sense of the book depends on your subjective
learning of language as well as the physical process of optical
stimulation. Actually, my hypothesis includes the conjecture that
photons may not be physical phenomena at all: http://s33light.org/fauxton

>
> So if I have a model of a brain in the same state and give it the same
> input, it'll think about Marxism and not whatever whatever whatever...

Without having a person who can tell you what they are thinking about,
how would your model tell the difference? To physics by itself, every
thought is whatever whatever whatever. The 3-p view of the brain is a-
signifying and generic. The 1-p view of the psyche is signifying and
proprietary. Your expectation that consciousness follows physics is
only based upon the a priori unexplained fact of consciousness, not
any kind of scientific insight into how consciousness could arise
physically in something. It's that expectation which needs to be
questioned, not the existence of subjectivity. The expectation of
consciousness arising automatically from physical mechanisms alone
exiles our ordinary experience of the world to some metaphysical never-
never land, an orphaned dimension without any justification or
ontology. It forces a Cartesian theater on us, but then denies it,
leaving only promissory materialism...'science will provide'. I'm not
buying it.

>
> I don't know where your idea of having the model of a thing could help you
> predict inputs outside of it...

I'm saying that you can't have a model for brain behavior for exactly
that reason. Too much of it comes from outside of it, continuously,
dynamically, interactively, intentionally, semantically, emotionally.
It's the other guys here who are saying that the brain behavior can be
predicted by biochemistry alone. I used to think that too, but I have
a better way of making sense of it now.

Craig

Quentin Anciaux

Oct 5, 2011, 10:15:41 AM
to everyth...@googlegroups.com


2011/10/5 Craig Weinberg <whats...@gmail.com>

No, they are not saying that. They are saying that a model of the brain fed with the same inputs as a real brain will act as the real brain... if that were not the case, the model would be wrong, so you could not label it a model of the brain.

They never said they could know which inputs you could have, and they don't have to. They just have to know the transition rule (biochemical/physical) of each neuron, and as the brain respects physics, so does the model, and so it will react the same way.

You make the same mistake with your TV pixel analogy. If I know all the transition rules of *a pixel* according to input... I can build a model of a TV that will *exactly* display the same thing as the real TV for the same inputs, without knowing anything about movies/shows/whatever... I don't care about movies at that level. They never said that they would explain/predict the input to the TV, just replicate the TV.
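Quentin's TV analogy can be made concrete. A minimal sketch (the pixel transition rule below is hypothetical, invented for illustration; the thread specifies none): a "model TV" that implements the same per-pixel rule as the "real TV" reproduces the display for any input stream, with no reference to what show the inputs encode.

```python
# Sketch of Quentin's point: same transition rule + same inputs => same output,
# regardless of what the inputs "mean". The rule here is made up for the demo.

def pixel_rule(state, signal):
    """Hypothetical transition rule: new brightness from old state + input."""
    return (state + signal) % 256

def run_tv(rule, inputs, n_pixels=4):
    """Drive n_pixels pixels with a list of per-frame signal vectors."""
    states = [0] * n_pixels
    frames = []
    for frame_signal in inputs:
        states = [rule(s, sig) for s, sig in zip(states, frame_signal)]
        frames.append(list(states))
    return frames

# "Real" TV and model TV share the rule; feed both the same input stream.
inputs = [[10, 20, 30, 40], [5, 5, 5, 5], [100, 0, 50, 25]]
real = run_tv(pixel_rule, inputs)
model = run_tv(pixel_rule, inputs)
assert real == model  # identical display; movie content never referenced
```

The model never "knows about movies": it only replicates the rule, which is exactly the claim being made about replicating a brain's transition rules.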

Regards,
Quentin

 

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Bruno Marchal

Oct 5, 2011, 11:09:50 AM
to everyth...@googlegroups.com

On 04 Oct 2011, at 02:51, Craig Weinberg wrote:

> On Oct 3, 11:16 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>>>>> I don't think that there are any arithmetical beings.
>>
>>>> In which theory?
>>
>>> In reality.
>>
That type of assertion is equivalent to "because God says so".
>> Reality is what we try to figure out.
>> If you know for sure what reality is, then I can do nothing, except
>> perhaps invite you to cultivate more the modest doubting attitude.
>
> Ok, let's say that I'm mathgnostic. I doubt the existence of
> arithmetic beings independent of matter.

I doubt the existence of matter being independent of arithmetic.


> I am sympathetic to
> numerological archetypes as coherent themes (or themes of coherence)
> which run through perception but to say that arithmetic spirits haunt
> empty space

Empty spaces haunt the numbers' dreams.

> doesn't orient me to anything true or real, it seems like
> pure fiction.

You reify spaces, so that arithmetical beings look magical. But
arithmetical truth is outside any physical category.
A number is just not the type of entity having a location, although it
can manifest itself through locally physical realities.

> If it were the case then I would expect five milk
> bottles in a group to have the same basic function as five protons in
> a nucleus,

I don't see the logic here.


> five boron atoms in a molecule, five cells in a dish, etc.
> I just don't see any examples of causally efficacious arithmetic as an
> independent agent.

?

"Inside arithmetic" was a shorthand for "as determined through
arithmetical relations, or as observable by persons determined by
arithmetical relations (in a theoretical computer science sense)."

> whereas I know for a fact that I am inside my body.

You are not. You are an immaterial being, and you have no more
location than a number, or a space. But I can explain in detail why
the illusion of having a location can be very strong when a person gets
entangled with deep histories.


> What form of a non-comp theory are you asking for? I will try to
> comply.

Just tell us what you are assuming as primitive, and what you derive
from that.

The best form would be a first order logical axiomatization, because
those are provably independent of any "metaphysical baggage", to coin
an expression by Brian Tenneson, which sum well the importance of such
type of theory. But I know you try to avoid technical literature.


>
>>
>>
>>> So yes, arithmetic extends to the inconceivable and nonaxiomatizable
>>> but the sensorimotive gestalts underlying arithmetic are much more
>>> inconceivable and nonaxiomatizable. A greater infinity.
>>
>> Inside arithmetic *is* a bigger infinity than arithmetic. It is not
>> even nameable.
>
> If it's inside of arithmetic, how can it be bigger than itself?

Good question. It is not easy to answer without being much more
technical. Let me just say that this is a question of internal
perspective. It is related to a phenomenon discovered by Skolem, which
relativizes the notion of cardinality (used to measure the size
of mathematical objects, and which often measures the size of the
intrinsic ignorance of the entities living in those objects).
I should stick to "from inside, arithmetic will be perceived as bigger
than from outside".
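For background, the phenomenon Bruno alludes to is the classical Löwenheim–Skolem result; the following gloss is standard textbook material, added by the editor, not a quote from the thread:

```latex
\textbf{Skolem's phenomenon (standard statement).}
By the downward L\"owenheim--Skolem theorem, any countable first-order
theory with an infinite model has a \emph{countable} model $\mathcal{M}$.
If the theory proves ``there is an uncountable set,'' then some
$a \in \mathcal{M}$ satisfies
$\mathcal{M} \models \text{``$a$ is uncountable''}$, even though
$\{\, x \in \mathcal{M} : \mathcal{M} \models x \in a \,\}$ is countable
seen from outside: the bijection with $\omega$ exists, but not as an
element of $\mathcal{M}$. Cardinality is thus relative to the model;
hence ``bigger from inside.''
```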

>
>>
>>
>>
>>>>>> So I see a sort of racism against machine or numbers, justified
>>>>>> by
>>>>>> unintelligible sentences.
>>
>>>>> I know that's what you see. I think that it is the shadow of your
>>>>> own
>>>>> overconfidence in the theoretical-mechanistic perspective that you
>>>>> project onto me.
>>
>>>> You are the one developing a philosophy making human with
>>>> prosthetic
>>>> brain less human, if not zombie.
>>
>>> I'm not against a prosthetic brain, I just think that it's going to
>>> have to be made of some kind of cells that live and die, which may
>>> mean that it has to be organic, which may mean that it has to be
>>> based
>>> on nucleic acids.
>>
>> Replace, in the quote just above, "prosthetic brain" by "silicon
>> prosthetic brain".
>
> I think that if we understand that the brain itself is what is feeling
> and thinking,

You contradict what you told me in another post. You said you agree
that it is not the brain which feels or thinks, but a person.
A brain feels nothing, indeed, for an obvious reason: it is even the only
organ without sensory nerves.
I am very open to the idea that individual neurons can have a proper
inner life, like amoeba and plants, but those are not related to our
inner life. Consciousness is not a sort of sum of the consciousness of
the part of a body, if only because consciousness is not something
material at all. It has no mass, no energy, no velocity, no space-
location, nor even any time location (and I agree this might seems
counter-intuitive).
That is why it is far simpler to explain consciousness from the
number, and then physicalness from sharable coherent deep dreams, than
the inverse.


> rather than some disembodied computational function,
> then we have to consider that the material may not be substitutable,
> or if it is, the probability of successful substitution would be
> directly proportional to the isomorphism of the biology. If we knew
> of a particular computation which did cause life and consciousness to
> arise in inanimate objects, then that would be convincing, but thus
> far, we have not seen any suggestion of a computer program plotting
> against its programmer or expressing an unwillingness to be halted.

Life is explained by the second recursion theorem of Kleene. See my
paper on amoeba and planaria.
Consciousness is 99% explained by the fact that machines can prove the
second recursion theorem of Kleene, or by Gödel's diagonalization
lemma. Then we can fully meta-explain why 1% of consciousness is not
explainable by any entity (except perhaps one, which in this case must
remain silent).

The theorem of Kleene ensures that for any computable transformation
T, you can find a program applying T to its own 3-I (that is, its body
at any chosen level of description). You get the simple self-
reproducing amoeba by using T = identity.
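The T = identity case is just a quine, a program that outputs its own source. A minimal sketch in Python (a standard construction offered by the editor as an illustration; the thread itself contains no code):

```python
# Kleene's second recursion theorem with T = identity: a program that
# reproduces its own code. The program below prints exactly its own two
# lines, i.e. it has effective access to a description of its own "body".
src = 'src = {!r}\nprint(src.format(src))'
print(src.format(src))
```

The trick is the same diagonalization used in Gödel's lemma: the template `src` is applied to a quoted copy of itself, so the output reconstructs the full program.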

Programs do not plot against us, because they are too young, and we
still control them rather well.

>
>>
>>> Your theory would conclude that we should see
>>> naturally evolved brains made out of a variety of materials not
>>> based
>>> on living cells if we look long enough. I don't think that is
>>> necessarily the case.
>>
>> The theory says that it is *possibly* the case, and the advent of
>> computers show it to be the case right now. The difference between
>> artificial and natural is ... artificial.
>>
>
> But why, if biology has nothing to do with life, and neurology has
> nothing to do with consciousness,

Biology is the study of life. I guess you mean that I meant that the
fundamental principles of biology have nothing to do with carbon, but I
am not sure about this. It might be that the structure of carbon and
its role in the origin of life can be deduced from arithmetic. And
this does not mean that, once life has appeared with carbon, carbon
cannot be abandoned later.

> do we find no non-biological entity
> having evolved to live or demonstrate human consciousness.

All life forms today depend completely on oxygen, and on plants. That was
not true at the beginning. Life tends to proliferate and can easily
adapt to planetary conditions, preventing too many different life forms
from developing. Many different explanations are possible.
Also, you beg the question. Today's machines are evolving more quickly
than any life form. And I have already argued that today's LUMs are as
conscious as you, me, and a jumping spider.


> Doesn't
> that seem unlikely to you? I understand your point that comp promises
> to deliver computers which could be considered as conscious as we are,
> but I think that's only because science is hopelessly confused about
> what consciousness is.
>
> I agree that there is no literal difference between natural and
> artificial, but it's still a glaring deficiency of comp in my mind
> that in the history of the Earth there just so happens to not be any
> non-organic life at all.

The quantum theory of atoms explains well why carbon, hydrogen,
oxygen, nitrogen, have some special role.


> Especially if computers, as you seem to
> suggest, can adopt consciousness just by functioning in the same
> manner as something conscious, then it seems by now there would be
> some cave somewhere where the limestone had learned to dance like a
> beetle or bloom like a flower.

Chemical life has many features making it rather rare. Very natural,
but very rare. Now, once it happens, it can take much different forms.


>
>>
>>
>>>>>>>>>> This is the kind of strong metaphysical and aristotleian
>>>>>>>>>> assumption
>>>>>>>>>> which I am not sure to see the need for, beyond extrapolating
>>>>>>>>>> from
>>>>>>>>>> our
>>>>>>>>>> direct experience.
>>
>>>>>>>>> Is it better to extrapolate only from indirect experience?
>>
>>>>>>>> It is better to derive from clear assumptions.
>>
>>>>>>> Clear assumptions can be the most misleading kind.
>>
>>>>>> But that is the goal. Clear assumptions lead to clear misleading,
>>>>>> which can then be corrected with respect to facts, or repeatable
>>>>>> experiments.
>>>>>> Unclear assumptions lead to arbitrariness, racism, etc.
>>
>>>>> To me the goal is to reveal the truth,
>>
>>>> That is a personal goal. I don't think that truth can be revealed,
>>>> only questioned.
>>
>>> How can you question it if it is not revealed?
>>
>> It can be suggested, like in dreams.
>
> So it is better to extrapolate from what our dreams suggest than the
> 'unclear assumptions' of our ordinary, direct, shared, conscious
> experience?

Not at all. But we can deduce facts from the fact that we are dreaming.
And we don't have any *direct* shared conscious experience. That's a
theorem in comp (and evident for grown-up adults, I thought). There
is always some amount of indirection.

>
>>
>>
>>
>>>>> regardless of the nature of the
>>>>> assumptions which are required to get there. If you a priori
>>>>> prejudice
>>>>> the cosmos against figurative, multivalent phenomenology then you
>>>>> just
>>>>> confirm your own bias.
>>
>>>> I don't hide this, and it is part of the scientific (modest)
>>>> method. I
>>>> assume comp, and I derive consequences in that frame. Everyone is
>>>> free
>>>> to use this for or against some world view.
>>
>>> It's a good method for so many things, but not everything, and I'm
>>> only interested in solving everything.
>>
>> You might end up with a theory of everything that you will not be
>> able to communicate. You might have fans and disciples (and even
>> money) but not students and researchers correcting and extending your
>> work.
>>
>
> I can't do anything about that. If the world is not interested in the
> truth, then I can't change it.

Truth or hypothesis?
Do you want to do science (hypothesis), or not?

I take for granted elementary-school arithmetic, and then I can
take the time to explain how a computer works.

> especially based on anti-physicalist conceptions of simulation as the
> only reality.

Quite the contrary. That is the result, derived from elementary
arithmetic and the hypothesis that biology relies on deterministic
laws. I am not pretending it is obvious (other people have pretended
that for years, and it took me time to understand that it is not *so*
simple after all).

That is nicely said, but I don't buy it. You assume what my intuition
asks me to explain.

To say that arithmetic is too narrow is a symptom that you don't know
what you are talking about.
Analysis and physics are deluding narrowings of arithmetic.
When I want to be provocative, I say that the physical universe is a
failed attempt made by God to understand the numbers.

>
>>
>>
>>
>>>>>> ?
>>>>>> (I let you know that one of my main motivation consists in
>>>>>> explaining
>>>>>> the physical, that is explaining it without using physical
>>>>>> notions
>>>>>> and
>>>>>> assumptions. The same for consciousness).
>>
>>>>> But what you are explaining it with is no more explainable than
>>>>> physical notions or assumptions. Why explain what is real in terms
>>>>> which are not real?
>>
>>>> You are just begging the question. You talk as if you knew what
>>>> is
>>>> real or not.
>>
>>> I know that consciousness is real,
>>
>> Good. My oldest opponents were disagreeing on this point (a criticism
>> which does not make much sense).
>
> Heh, yeah, I can maybe see quibbling with the wording of the cogito,
> but the spirit of it seems silly to deny.

The cogito is important, and I agree with Slezak that the Gödelian
sentences illustrate the (machine's) cogito:

Slezak, P. (1983). Descartes's Diagonal Deduction. Brit. J. Phil. Sci.
34, pp. 13-36.


>
>>
>>> and my consciousness through my
>>> body tells me that matter is real.
>>
>> Matter is real. I do agree with this. But matter, assuming comp, is
>> not something made of elementary material things. Matter, to be
>> short,
>> is the border of the universal mind, as seen by the universal mind.
>> It
>> is a real perception of something which is not primarily material,
>> but
>> sums up infinities of computations. An instructive image is the
>> border
>> of the Mandelbrot set.
>
> I do understand what you mean, and I almost agree, again, except that
> the Mandelbrot set is too literal.

I have conjectured that the rational Mandelbrot set is a creative set
in the sense of Post, itself equivalent to the notion of universal
machine. In that case the Mandelbrot set is a compactification of a
universal dovetailer. It is a picture of "home".
In that case the analogy is literal.


> It doesn't look like a mind, it
> looks like a leaf or a feather.

Some parts look like a projection of a cut of the four-dimensional
tree of life.

> Obsessive, repetitive self-
> similarity... definitely part of it, but you need the orientation of
> naive sensation and motive to make sense of it. It's the elephant in
> the Mandelbrot.

We are back to arithmetical realism. The structure of the M set is
independent of us. It occurs naturally in almost all iterations of
functions on complex numbers.
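For concreteness, the iteration behind the M set is z → z² + c; a point c belongs to the set iff the orbit of 0 stays bounded. A standard escape-time membership test, sketched by the editor (not code from the thread):

```python
# Escape-time test for Mandelbrot membership: iterate z -> z*z + c from
# z = 0 and watch whether the orbit escapes the radius-2 disc.

def in_mandelbrot(c, max_iter=100, bound=2.0):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False  # orbit escaped: c is outside the set
    return True  # orbit stayed bounded so far: c presumed inside

assert in_mandelbrot(0j)          # fixed point 0: inside
assert in_mandelbrot(-1 + 0j)     # period-2 orbit 0, -1, 0, -1, ...: inside
assert not in_mandelbrot(1 + 0j)  # orbit 1, 2, 5, ... escapes: outside
```

`max_iter` bounds the test: membership near the boundary is only "presumed", which is exactly the kind of semi-decidability Bruno's creative-set conjecture trades on.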

>
>>
>>> My consciousness also tells me that
>>> some of its own contents do not matter and its perceptions do not
>>> faithfully render what is real outside of my awareness. I would say
>>> that arithmetic truths matter but they are not real, and therefore
>>> cannot be manifested in a vacuum - only through some material object
>>> which can accommodate the corresponding sensorimotive experiences.
>>> You
>>> can't write a program that runs on a computer made of only liquid or
>>> vapor - you need solid structures to accommodate fixed arithmetic
>>> truths. You need the right kinds of matter to express arithmetic
>>> truths, but matter does not need arithmetic to experience its own
>>> being.
>>
>> Not necessarily. You have to give an argument, and there are many
>> results which can explain to you how such arguments have to be very
>> sophisticated. Apparently, in arithmetic, numbers do dream
>> coherently (in a first-person sharable way) of a stable quantum
>> reality, with some symmetries at the bottom, and wavy-like
>> interferences.
>>
>
> I think what you are saying is that matter can arise from arithmetic,
> which is possible, but I don't see the difference. Why is arithmetic
> easier to explain than matter?

Only the beginning of arithmetic is simpler to explain than matter.
To explain why photons have mass, you need the non-trivial arithmetical
fact that the sum of all natural numbers can give -1/12.
Matter is hard to explain, even without addressing the hard problem of
matter.
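The "-1/12" is ζ(-1), the value the analytic continuation of the Riemann zeta function assigns to 1 + 2 + 3 + ⋯; it is not an ordinary limit. One elementary numeric route, sketched by the editor (the detour through Abel summation of the alternating eta series is the editor's choice of method, not something stated in the thread):

```python
# zeta(-1) = -1/12 via the Dirichlet eta function:
#   zeta(s) = eta(s) / (1 - 2**(1 - s)),
# where eta(-1) = 1 - 2 + 3 - 4 + ... is Abel-summable to 1/4.
# Abel summation: evaluate sum of (-1)**(n-1) * n * x**n for x just below 1.

def abel_eta_minus_one(x, terms=50000):
    # Partial Abel sum; the closed form is x / (1 + x)**2, which -> 1/4 as x -> 1.
    return sum((-1) ** (n - 1) * n * x ** n for n in range(1, terms))

eta = abel_eta_minus_one(0.999)       # close to 1/4
zeta_minus_one = eta / (1 - 2 ** 2)   # denominator: 1 - 2**(1 - (-1)) = -3
print(zeta_minus_one)                 # close to -1/12 = -0.08333...
```

The divergent series only "gives" -1/12 through this kind of regularized assignment, which is the sense relevant to its appearance in physics.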

> I think that my hypothesis rooted in
> 'sense' (as the relation between matter-space-entropy and energy-time-
> significance)

I see words here, and no explanation.


> is an audaciously Promethean notion which grounds our
> perception in a cosmos which is both authentic and participatory, as
> well as transcendent and forgiving. From comp I get nothing surprising
> beyond the initial appreciation of the depth of possibilities of
> arithmetic, which although impressive, strike me as being merely awe
> inspiring with no hint at the gravity of the experience of organic
> life.

Why not? The beginning of arithmetic is simple, but when you grasp
that arithmetic is full of life, and that the arithmetical platonia is
infinitely messy, you might as well understand that comp might be
possible, and this without abandoning any of your intuitions, except
your quite frightening intuition that my son-in-law cannot appreciate
a good steak.

>
>>
>>
>>>> Now it is a fact that all scientists agree with simple facts like
>>>> 1+9=10, etc. Actually they are using such facts already in their
>>>> theories. I just show that IF we are machine, THEN those elementary
>>>> facts are enough to explain the less elementary one.
>>
>>> But since we aren't only a machine, then it's a dead end.
>>
>> You should say: "but since in my theory I am assuming that we are
>> not
>> machine, it is a dead end in my theory".
>
> Yes. Not trying to be rude, I just assume that everything I say is
> automatically within the disclaimer of 'in my view'.

Then you have to repeat it, and avoid the "truth" label.


>>
>>> It's
>>> circular reasoning because you can say we can't prove we're not
>>> machines,
>>
>> I say the exact opposite. We can prove that we are not machine (in
>> case we are not machine). If we are (consistent) machine, then we
>> cannot prove it.
>
> So how do we prove that we are not machine?

For example, by showing that comp entails that the mass of an electron
is, after all renormalizations are completed, bigger than one ton.

> Why can't we be both
> machine and not machine?

Comp makes sense of the idea that the 3-I is a machine, and the 1-I is
not. But if you mean literally that we are machine and non-machine, then
this is just a contradictory statement.

>
>>
>>> but the whole idea of 'proving' is mechanical so you are
>>> just magnifying the implicit prejudice and getting further from the
>>> non-mechanistic truths of awareness.
>>
>> The human activity of proving is not mechanical(*), but a gentle
>> polite proof should be mechanically checkable. You can't say to the
>> peer reviewers that for proposition 13 they have to pray to God or
>> smoke salvia divinorum. (Or you say it only at the coffee break, and
>> this is for private concerns, not for the publication, unless it is a
>> paper on salvia or God, but then the goal is no longer to prove but to
>> suggest a possible empirical discovery).
>>
>> (*) assuming P ≠ NP.
>
> If peer reviewers demand that a theory which explains subjectivity not
> examine subjectivity directly, then they have a priori excluded any
> possibility of understanding subjectivity. The peer reviewers are the
> problem, not the theory.

Sure. But they do not demand that subjectivity is not examined
directly; they demand that *arguments* are not based on non-
communicable statements, beyond the axioms.


>
>
>>
>>
>>
>>>>>>>>> The link between the
>>>>>>>>> sensorimotive and electromagnetic is the invariance between
>>>>>>>>> the
>>>>>>>>> two.
>>
>>>>>>>> ?
>>>>>>> Feelings and action potentials have some phenomenological
>>>>>>> overlap.
>>
>>>>>> What is feeling, what is action, what is potential?
>>
>>>>> To ask what feeling is can only be sophistry.
>>
>>>> Not when addressing issues in fundamental cognitive science.
>>>> Niether
>>>> matter nor consciousness should be taken as simple elementary
>>>> notions.
>>
>>> But numbers should be taken as elementary notions?
>>
>> In the usual mathematical sense. No need of extra metaphysical
>> assumptions. You just need to believe sentences like "prime numbers
>> exist".
>
> They exist in the context of a particular sensorimotive logos, not in
> any independent sense. Something like the visible spectrum is a much
> stronger primitive as it appears to us unbidden and unexplained as a
> shared experience without having to be learned or understood.

But if this is an argument, then you take what we seek to explain as
the direct assumption. You could as well say that we should cultivate
our gardens instead of doing fundamental research.

>
>> All the material sciences use this. Despite the claims of some
>> philosophers, we just cannot do science without assuming the
>> independence of the truth of elementary (first-order) arithmetical
>> relations.
>
> They can have truth or refer to truth without themselves being
> phenomena which exist independently.

Not assuming comp. That is the whole point of UDA.

> They aren't a they even, it's
> just an ephemeral collection of human ideas about quantitative
> universality. I don't see that they describe quality or techne at all.

They provide the best we can hope for, in case we do survive with a
digital brain.

>
>>
>>> That's the problem,
>>> you are trying to explain awareness as an epiphenomenon
>>
>> Awareness is not an epiphenomenon at all. It is a real, non-illusional
>> epistemological phenomenon which is responsible (in some logico-
>> arithmetical sense) for the rise of physical reality.
>
> If it's not an epiphenomenon, then are you saying it is not a
> consequence of arithmetic?

Why? I believe free will makes sense in arithmetic. Being
epistemological does not mean being epiphenomenal.
The habit of putting the higher levels in the trash is an Aristotelian
habit, founded on an Aristotelian dogmatic misconception of mind and
matter. I think.

>
>>
>> It is: NUMBERS ==> CONSCIOUSNESS/DREAMS ==> SHARABLE DREAMS (physical
>> realities).
>
> Isn't that saying consciousness is an epiphenomenon of numbers?

I would say it is a fundamental phenomenological reality.

> What
> are numbers without consciousness?

They are like numbers without prime numbers. Just nonsense.


>
>>
>>> of cognitive
>>> science, when of course cognition arises from feeling (otherwise
>>> babies would come out of the womb solving math equations instead of
>>> crying, and civilizations should evolve binary codes before
>>> ideographic alphabets and cave paintings).
>>
>> I agree that cognition arise from feelings.
>>
>
> Cool

OK. It is a key point. Feelings precede (even in logic and
arithmetic, but also in physical time) thought, and thought precedes
languages, and languages precede computers, etc. But all that
precedes matter, in the logico-arithmetical reality.


>
>>
>>
>>>>> It is a primitive of
>>>>> human subjectivity, and possibly universal subjectivity. To
>>>>> experience
>>>>> directly, qualitatively, significantly. An action potential is an
>>>>> electromagnetic spike train among neurons. They can be
>>>>> correlated to
>>>>> instantiation of feelings.
>>
>>>> I agree with all this, but that has to be explained, not as taken
>>>> for
>>>> granted.
>>
>>> How can any primitive be explained?
>>
>> It can't, by definition. That is why I don't take matter and
>> consciousness as primitive, given that we can explain them from
>> numbers (and their laws). The contrary is false. We cannot explain
>> numbers by matter or consciousness.
>
> I think that we can explain numbers from consciousness. They are
> sensorimotive teleological gestures refined and polished into an
> instrumental literalism which closely approximates a particular band
> of literal sense that we share with many physical, chemical, and
> primitive biological phenomena. They do not extend beyond a
> superficial treatment of experiences like pain, pleasure, sensation,
> humor, poetry, music, etc.

You are confusing the numbers studied by arithmeticians with the human
numbers, which can be studied by psychologists, sociologists, etc.

>
>> It can be proved that numbers
>> cannot be explained at all. In that sense, they are provably
>> necessarily primitive.
>
> No more so than colors or words, thoughts, feelings, being, etc.

Why? Comp does provide an explanation of many attributes of feelings,
except for a (tiny) part, but then comp meta-explains completely why
we cannot explain them completely.

Why? Nothing third-person describable can be "the same thing" as a
lived experience.


> they have the
> same rhythmic patterns, instantiation, and duration. The content,
> however is precisely the opposite: The MRI patterns are topological
> regions of activity in a 3D space, without any particular meaning or
> significance, but with great specificity in terms of precise location
> and public verifiability. The subjective experience is literally the
> opposite. Not topological in space but perceptual in time.

OK. (Actually, honesty forces me to say that although I was pretty sure
that subjectivity always involves time, I am less sure since I have
had some experiences with Salvia divinorum. The plant has raised some
doubt about this consciousness/time relation.)

> If you
> shorten the interval too much, you lose the sense of the perception
> entirely, but the electromagnetic pattern does not vanish. The
> subjective experience has significance and meaning. Without the
> experience side of it, the neural correlate would be no more
> interesting than examining sand dunes. Without taking significance
> into account, there is no purpose to examine the MRI in the first
> place.

OK.


>
>>
>>
>>> Electromagnetism is the name we give to the various phenomena of
>>> matter across space - waving, attracting, repulsing, moving,
>>> intensifying, discharging, radiating, accumulating density,
>>> surfaces,
>>> depth, consistency, etc. Sensorimotivation is the name I'm giving to
>>> the various phenomena of experience (energy) through time -
>>> detecting,
>>> sensing, feeling, being, doing, intention, image, emotion, thought,
>>> meaning, symbol, archetype, metaphor, semiotics, communication,
>>> arithmetic, etc.
>>
>> That's what the numbers can explain, and that is what cannot explain
>> the numbers (without assuming them implicitly).
>
> I think that numbers can't explain any of that without the a priori
> expectation of those experiences. Numbers by themselves do not suggest
> anything but more numbers.

Not at all. You have to learn to listen to them. They look a bit
alien, and it can take time to infer the life within, but this is our
current *human* lack of imagination, to talk like John Mikes.


> They have no capacity to recognize their
> own patterns,

Of course they can. Well, of course, if you study a bit of computer
science. It is really due to the fact that numbers can recognize their
own patterns, and change them accordingly, that they brought a non-
computable amount of mess into Platonia.

> only to be recognized by the computational shadows cast
> by concretely embodied agents of sense and motive.

I don't really believe in concretely embodied agents (even without
comp). I have never seen one. It is an Aristotelian myth.
I agree that it might look like they exist, but that is an
extrapolation brought by billions of years of life struggle, in a very
deep ocean of computations, seen from inside.

Yes. Consciousness is part of arithmetical truth, and it is non-
communicable as such by any arithmetical entity.


> or the arithmetic
> itself, the content, while electromagnetism contains only the
> computational consequences of the arithmetic. Yeah, if that's what you
> are saying I like it. It gives me something new.

Cool.


> I don't think it
> captures the significance of what the presence of p does as far as
> making sensorimotive analog through time and electromagnetic being
> discrete across space.

Er... this points to an open problem. Comp does not yet decide between a
continuous space-time and a discrete one. But comp predicts that some
physical things have to be continuous, though it might still be only the
probabilities. I dunno.


>
>>
>>>> Hmm... The difference between subjective and sensorimotive would be
>>>> captured by the difference between Bp & p, and Bp & Dt & p. That
>>>> confirms my feeling described above.
>>
>>> I'll get back to you if you can explain the variables better. I
>>> tried
>>> Googling them but nothing clear comes up for me.
>>
>> I hope that what I wrote above helps a bit. There are good book on
>> the
>> subject, but you need to follow some course in mathematical logic, to
>> get familiar with it.
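For readers puzzled by the variables, here is a gloss on the notation, which is standard in provability logic and in Marchal's AUDA papers (the one-line readings are the editor's summary, not quotes from the thread):

```latex
Let $p$ be an arithmetic sentence, let $Bp$ abbreviate ``$p$ is
provable'' (G\"odel's \emph{beweisbar} predicate), and let
$Dp \equiv \neg B \neg p$, so that $Dt$ (with $t$ a trivially true
sentence) reads ``the machine is consistent.'' Then:
\begin{itemize}
  \item $Bp$: provable (the communicable, 3-person belief);
  \item $Bp \wedge p$: provable and true (the knower; obeys S4Grz);
  \item $Bp \wedge Dt \wedge p$: provable, consistent, and true
        (the ``sensible'' variant matched with the sensorimotive).
\end{itemize}
```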
>
> I think that there is a cost associated with relying exclusively on
> mathematical logic in a TOE though. My hypothesis shows how modal
> agreements magnify the in-language and attenuate the outward
> sensitivity. Like a gaggle of teenagers hanging around in a pack,
> talking to each other incessantly and oblivious to the world.

Hmm...

This does not make much sense to me ...

> so that the exact circumstance of someone's
> birth - the thoughts and feelings of the doctor and nurse, the sound
> of the cars outside, the proximity to the vineyards and the
> ocean...all of that may need to be reproduced to instantiate a
> particular identity.

... but I agree with this, although the subjective memory of all this
might be quite enough, most probably. But even if the ocean is needed,
it would only make the comp subst level lower.


>
>>
>>
>>
>>>> We would also be led to the peculiar situation
>>>> where machine could correctly prove that they are not machine,
>>
>>> I don't see how matter as a primitive makes machines able to prove
>>> that they are not machines.
>>
>> I was unclear. What I say is that if a machine convinces herself,
>> with your help perhaps, that some primitive matter exists and has a
>> role in the instantiation of her consciousness, then such a machine
>> will eventually conclude (by a way similar to UDA) that she is not a
>> machine. If such a machine is ideally correct, she would conclude
>> correctly that she is not a machine. This comes from the fact that
>> the UDA reasoning can be done by machines (as AUDA illustrates in
>> some admittedly abstract way). You might intuit this if you take the
>> time to follow the UD argument.
>>
>
> Hmm. Not sure I get it. I sort of get that the mathematical
> proposition of a matter-like topology would give rise to some novelty
> through computational non-accessibility but I don't know that the
> novelty would necessarily seem non-mechanical.

If it is mechanical, it is Turing emulable.
If it is Turing emulable, it is "already" emulated infinitely often in
arithmetical truth (even in a tiny part of arithmetical truth).
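Bruno's claim that whatever is Turing emulable is "already" emulated infinitely often leans on the universal dovetailer, which interleaves the execution of every program so that each one receives unboundedly many steps, even programs that never halt. As a rough illustration only (the scheduling scheme and the toy non-halting programs below are invented for this sketch, not Bruno's actual construction):

```python
from itertools import count, islice

def dovetail(program_factory):
    """Universal-dovetailer sketch: on round n, admit program n into the
    pool and run one step of every program admitted so far. No program is
    ever starved, even if none of them halt. Yields (program_id, value)."""
    active = {}
    for n in count():
        active[n] = program_factory(n)   # admit program n
        for pid, prog in active.items():
            yield pid, next(prog)        # one step of each admitted program

def program(k):
    # toy non-halting program: counts 0, 1, 2, ... forever
    return count()

# first ten dovetailed steps: programs 0, 1, 2, 3 all get started
trace = list(islice(dovetail(program), 10))
```

On round n the sketch runs one step of each of the n+1 programs admitted so far, so in the limit every program is executed through unboundedly many steps, which is the sense in which a computation is run "infinitely often" in the dovetailing.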

>
>>> I think a machine (or something we
>>> presume is a machine) proves whether or not it is a machine by how
>>> it responds to errors or hardware failures.
>>
>> A machine can never prove, still less know, that she is a machine.
>> Even machines have to make a leap of faith to admit mechanism. Most
>> machines will be 'naturally' against comp, before introspecting
>> deeper, and reasoning deeper, so that they can infer the possibility
>> (but nothing more).
>
> I'm not against the reasoning of that, I just don't think it's a
> compelling basis for rich perception. Sure, everyone's reality tunnel
> looks like reality and not a tunnel, but that doesn't explain why the
> contents of the tunnel are so interesting and so real.

Because arithmetical truth has this basic property: the more you know
about it, the more you grasp how little of it you know. Imagine a very
dark infinite place (machine ignorance): at first you see almost
nothing, so you can still believe the place is not so big. But the more
light you shine on it, the more you grasp how big the place is.


>
>>
>>> You could maybe say that what
>>> we are made of is an accumulation of the universe's favorite errors,
>>> failures, and aberrations.
>>
>> Partially, yes. Even partial lies. Perhaps. I'm not sure.
>
> Sure, yes. Partial lies are probably the only way to be certain of
> keeping truth alive. Indra's Net of Bullshit, haha. Seriously though,
> you need the alchemical base alloys to hide the precious metal within,
> otherwise it wouldn't be precious.

Cool image.


>
>>
>>
>>
>>>> making
>>>> all possible discourses of machine being of the type Bf. You might
>>>> eventually change my mind on the non provability of comp (as
>>>> opposed
>>>> to the non recognizability of the our level of comp). For this you
>>>> should convince the machine that material is necessarily
>>>> primitive. I
>>>> begin to doubt that non-comp can make any sense. Hmm...
>>
>>> If I pull the plug on the machine, then the machine halts. Why
>>> should
>>> that be the case were machine independent of material substrate?
>>
>> Because machines can have long and complex computational histories.
>> If you pull the plug on the machine, you act on her 3-body, whose
>> existence she shares with you, and so in the normal histories she
>> will dysfunction with a probability very near 1. From the point of
>> view of the machine, she will survive in the computations which are
>> closest to those normal computations (that explains the
>> comp-immortality, which can already be explained in the inferred QM
>> of nature).
>
> So a computer keeps computing even when you turn it off? That would be
> hard to swallow if you are saying that.

If your computer runs a complex computation, making it possible for a
consciousness to manifest itself relatively to you, and if you pull
the plug on your computer, then relatively to you, that consciousness
will no longer be able to manifest itself. But from the view of that
consciousness itself, it will continue to be manifested on
sufficiently similar computations, run by sufficiently similar
computers, somewhere in UD*.

If you want to stay in relation with that consciousness, you will have
to pull the plug on yourself simultaneously.

But this is true *only* in principle. To do this in practice, you have
to ensure that you unplug yourself at the right level. If not, you
might just end up in a universe where there is no plug, or something.
We cannot know our level, so to "unplug oneself" cannot really be
defined at all.

But yes, comp implies immortality. Actually it implies many forms of
immortality.

It is the big difference between Aristotle and Plato. With the first we
are mortal souls, imprisoned in a primary material universe. With
Plato/Plotinus, we are immortal souls imprisoned in the consciousness
prison (Rossler's image).

On the point of immortality, note that the Christians depart from the
atheists in keeping up Plato's immortality. Comp is a little more
Christian than atheistic. Of course, comp departs from both Christians
and atheists on the Aristotelian *primary* matter idea.

Bruno


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg, Oct 5, 2011, 11:44:32 AM, to Everything List
On Oct 5, 10:15 am, Quentin Anciaux <allco...@gmail.com> wrote:

> No they are not saying that. They are saying that a model of the brain fed
> with the same inputs as a real brain will act as the real brain... if it was
> not the case, the model would be wrong so you could not label it as a model
> of the brain.

That would require that the model of the brain be closer than
genetically identical, since identical twins and conjoined twins do
not always respond the same way to the same inputs. That may not be
possible, since the epigenetic variation and developmental influences
may not be knowable or reproducible. It's a 'Boys From Brazil' theory.
Cool sci-fi, but I don't think we will ever have to worry about
considering it as a real possibility. We know nothing about what the
substitution level of the 'same inputs' would be either. Can you say
that making a brain of a 10-year-old would not require 10 years of
sequential neural imprinting, or that the imprinting would be any less
complex to develop than the world itself?

>
> They never said they could know which inputs you could have and they don't
> have to. They just have to know the transition rule (biochemical/physical)
> of each neurons and as the brain respect physics so as the model, and so it
> will react the same way.

Reacting is not experiencing though. A picture of a brain can react
like a brain, but it doesn't mean there is an experiential correlate
there. Just because the picture is 3D and has some computation behind
it instead of just a recording, why would that make it suddenly have
an experience?

>
> You do the same mistake with your tv pixel analogy. If I know all the
> transition rule of *a pixel* according to input... I can build a model of a
> TV that will *exactly* display the same thing as the real TV for the same
> inputs without knowing anything about movies/show/whatever... I don't care
> about movies at that level. They never said that they would explain/predict
> the input to the tv, just replicate the tv.

You have to care about the movies at that level because that's what
consciousness is in the metaphor. If you don't have an experience of
watching a movie, then you just have an a-signifying non-pattern of
unrelated pixels. You need a perceiver, an audience, to turn the image
into something that makes sense. It's like saying that you could write
a piece of software that could be used as a replacement for a monitor.
It doesn't matter if you have a video card in the computer and drivers
to run it; without the actual hardware screen plugged into it there is
no way for us to see it. A computer does not come with its own screen
built into the interior of its microprocessors - but we do have the
equivalent of that. Our experience cannot be seen from our neurology;
you have to already know it's there. Building a model based only on
neurology doesn't mean that experience comes with it any more than a
video driver means you don't need a monitor.

Craig

Quentin Anciaux, Oct 5, 2011, 11:54:36 AM, to everyth...@googlegroups.com


2011/10/5 Craig Weinberg <whats...@gmail.com>

On Oct 5, 10:15 am, Quentin Anciaux <allco...@gmail.com> wrote:

> No they are not saying that. They are saying that a model of the brain fed
> with the same inputs as a real brain will act as the real brain... if it was
> not the case, the model would be wrong so you could not label it as a model
> of the brain.

That would require that the model of the brain be closer than
genetically identical, since identical twins and conjoined twins do
not always respond the same way to the same inputs.

They aren't in the same state.
 
That may not be
possible, since the epigenetic variation and developmental influences
may not be knowable or reproducible. It's a 'Boys From Brazil' theory.
Cool sci-fi, but I don't think we will ever have to worry about
considering it as a real possibility. We know nothing about what the
substitution level of the 'same inputs' would be either. Can you say
that making a brain of a 10 year old would not require 10 years of
sequential neural imprinting or that the imprinting would be any less
complex to develop than it would be to create than the world itself?

>
> They never said they could know which inputs you could have and they don't
> have to. They just have to know the transition rule (biochemical/physical)
> of each neurons and as the brain respect physics so as the model, and so it
> will react the same way.

Reacting is not experiencing though. A picture of a brain can react
like a brain, but it doesn't mean there is an experiential correlate
there. Just because the picture is 3D and has some computation behind
it instead of just a recording, why would that make it suddenly have
an experience?

Because if you ask it something (feed input) you'll get an answer which would be the same as a real person's... you can't ask anything to a recording.
 

>
> You do the same mistake with your tv pixel analogy. If I know all the
> transition rule of *a pixel* according to input... I can build a model of a
> TV that will *exactly* display the same thing as the real TV for the same
> inputs without knowing anything about movies/show/whatever... I don't care
> about movies at that level. They never said that they would explain/predict
> the input to the tv, just replicate the tv.

You have to care about the movies at that level because that's what
consciousness is in the metaphor. If you don't have an experience of
watching a movie, then you just have an a-signifying non-pattern of
unrelated pixels. You need a perceiver, and audience to turn the image
into something that makes sense. It's like saying that you could write
a piece of software that could be used as a replacement for a monitor.
It doesn't matter if you have a video card in the computer and drivers
to run it, without the actual hardware screen plugged into it there is
no way for us to see it. A computer does not come with its own screen
built into the interior of its microprocessors

But a human does... what a magical feature, don't you think?
 
- but we do have the
equivalent of that. Our experience cannot be seen from our neurology,
you have to already know it's there. Building a model based only on
neurology doesn't mean that experience comes with it any more than a
video driver means you don't need a monitor.

Craig


Craig Weinberg, Oct 5, 2011, 1:09:13 PM, to Everything List
On Oct 5, 11:54 am, Quentin Anciaux <allco...@gmail.com> wrote:
> 2011/10/5 Craig Weinberg <whatsons...@gmail.com>
>
> > On Oct 5, 10:15 am, Quentin Anciaux <allco...@gmail.com> wrote:
>
> > > No they are not saying that. They are saying that a model of the brain
> > fed
> > > with the same inputs as a real brain will act as the real brain... if it
> > was
> > > not the case, the model would be wrong so you could not label it as a
> > model
> > > of the brain.
>
> > That would require that the model of the brain be closer than
> > genetically identical, since identical twins and conjoined twins do
> > not always respond the same way to the same inputs.
>
> They aren't in the same state.

That's what I'm saying. If copies at the genetic level do not produce
the same states, then what suggests to us that anything could produce
the same state?

>
> > That may not be
> > possible, since the epigenetic variation and developmental influences
> > may not be knowable or reproducible. It's a 'Boys From Brazil' theory.
> > Cool sci-fi, but I don't think we will ever have to worry about
> > considering it as a real possibility. We know nothing about what the
> > substitution level of the 'same inputs' would be either. Can you say
> > that making a brain of a 10 year old would not require 10 years of
> > sequential neural imprinting or that the imprinting would be any less
> > complex to develop than it would be to create than the world itself?
>
> > > They never said they could know which inputs you could have and they
> > don't
> > > have to. They just have to know the transition rule
> > (biochemical/physical)
> > > of each neurons and as the brain respect physics so as the model, and so
> > it
> > > will react the same way.
>
> > Reacting is not experiencing though. A picture of a brain can react
> > like a brain, but it doesn't mean there is an experiential correlate
> > there. Just because the picture is 3D and has some computation behind
> > it instead of just a recording, why would that make it suddenly have
> > an experience?
>
> Because if you ask it something (feed input) you'll get an answer which
> would be the same as a real person... you can't ask anything to a recording.

But you can ask something to a recording. (Please stay on the line,
your call is important to us... For technical support please say the
name of the product or press one...)

If I ask a ventriloquist dummy a question I will get an answer that
would be the same as a real person too. The computation is nothing but
recordings strung together with a lot of IF > THEN logic to
synchronize the output with the input. It's correlation, not
causation. The computations aren't understanding any questions or
answers, they are just matching pre-selected criteria against an a-
signifying database. You can't mistake a player piano for a human
pianist just because the end result is the same notes.
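Craig's description of "recordings strung together with a lot of IF > THEN logic" can be made concrete. In this minimal sketch (the keywords and canned lines are invented for illustration), the system never interprets meaning; it only matches pre-selected criteria against the input and plays back whichever recording was keyed to the match:

```python
# a fixed recording: the same output regardless of input
def recording(_utterance):
    return "Please stay on the line, your call is important to us."

# 'recordings strung together with IF > THEN logic': canned responses
# selected by matching pre-set keywords against the caller's words
CANNED = {
    "support": "For technical support, say the name of the product.",
    "billing": "For billing, press two.",
}

def phone_tree(utterance):
    for keyword, response in CANNED.items():
        if keyword in utterance.lower():   # IF the input matches a criterion
            return response                # THEN play the matching recording
    return recording(utterance)            # ELSE fall back to the loop tape
```

The point of the sketch is that the matching is purely syntactic: the dictionary lookup correlates inputs with outputs without any step where a question is understood.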

>
>
> > > You do the same mistake with your tv pixel analogy. If I know all the
> > > transition rule of *a pixel* according to input... I can build a model of
> > a
> > > TV that will *exactly* display the same thing as the real TV for the same
> > > inputs without knowing anything about movies/show/whatever... I don't
> > care
> > > about movies at that level. They never said that they would
> > explain/predict
> > > the input to the tv, just replicate the tv.
>
> > You have to care about the movies at that level because that's what
> > consciousness is in the metaphor. If you don't have an experience of
> > watching a movie, then you just have an a-signifying non-pattern of
> > unrelated pixels. You need a perceiver, and audience to turn the image
> > into something that makes sense. It's like saying that you could write
> > a piece of software that could be used as a replacement for a monitor.
> > It doesn't matter if you have a video card in the computer and drivers
> > to run it, without the actual hardware screen plugged into it there is
> > no way for us to see it. A computer does not come with it's own screen
> > built into the interior of it's microprocessors
>
> But a human does... what a magical feature don't you think ?

It's a helluva feature, definitely. I don't think it has to be magical
personally, but it definitely makes us different than a machine based
solely on physical function.

Craig

Stathis Papaioannou, Oct 5, 2011, 6:40:51 PM, to everyth...@googlegroups.com
On Wed, Oct 5, 2011 at 2:24 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> In fact, Craig himself
>> denies that his theory would manifest as violation of physical law,
>> and is therefore inconsistent.
>
> There is no inconsistency. You're just not understanding what I'm
> saying because you are only willing to think in terms of reactive
> strategies for neutralizing the threat to your common sense (which is
> a cumulative entanglement of autobiographical experiences and
> understandings, interpretations of cultural traditions and
> perspectives, etc).

If you are right then there would be a violation of physical law in
the brain. You have said as much, then denied it. You have said that
neurons firing in the brain can't be just due to a chain of
biochemical events. That would mean that, somewhere, a neuron fires
where examination of its physical state would suggest that it should
not fire. You can't have it both ways: EITHER the neurons all fire due
to detectable physical causes OR some neurons do not fire due to
detectable physical causes.


--
Stathis Papaioannou

Stathis Papaioannou, Oct 5, 2011, 7:10:37 PM, to everyth...@googlegroups.com
On Thu, Oct 6, 2011 at 2:44 AM, Craig Weinberg <whats...@gmail.com> wrote:
> On Oct 5, 10:15 am, Quentin Anciaux <allco...@gmail.com> wrote:
>
>> No they are not saying that. They are saying that a model of the brain fed
>> with the same inputs as a real brain will act as the real brain... if it was
>> not the case, the model would be wrong so you could not label it as a model
>> of the brain.
>
> That would require that the model of the brain be closer than
> genetically identical, since identical twins and conjoined twins do
> not always respond the same way to the same inputs. That may not be
> possible, since the epigenetic variation and developmental influences
> may not be knowable or reproducible. It's a 'Boys From Brazil' theory.
> Cool sci-fi, but I don't think we will ever have to worry about
> considering it as a real possibility. We know nothing about what the
> substitution level of the 'same inputs' would be either. Can you say
> that making a brain of a 10 year old would not require 10 years of
> sequential neural imprinting or that the imprinting would be any less
> complex to develop than it would be to create than the world itself?

Firstly, it is theoretically possible to model the brain arbitrarily
closely, even if technically difficult. Secondly, it is enough for the
purposes of the discussion to model a generic brain, not a particular
brain.

>> They never said they could know which inputs you could have and they don't
>> have to. They just have to know the transition rule (biochemical/physical)
>> of each neurons and as the brain respect physics so as the model, and so it
>> will react the same way.
>
> Reacting is not experiencing though. A picture of a brain can react
> like a brain, but it doesn't mean there is an experiential correlate
> there. Just because the picture is 3D and has some computation behind
> it instead of just a recording, why would that make it suddenly have
> an experience?

In the first instance, yes, you might not be sure if the artificial
brain is a zombie. But the fading qualia thought experiment shows
that if it is a zombie it would allow you to make absurd creatures,
partial zombies (defined as someone who lacks a particular conscious
modality but behaves normally and doesn't realise anything is wrong).
The only way to avoid the partial zombies is if the brain model
replicates consciousness along with function.

>> You do the same mistake with your tv pixel analogy. If I know all the
>> transition rule of *a pixel* according to input... I can build a model of a
>> TV that will *exactly* display the same thing as the real TV for the same
>> inputs without knowing anything about movies/show/whatever... I don't care
>> about movies at that level. They never said that they would explain/predict
>> the input to the tv, just replicate the tv.
>
> You have to care about the movies at that level because that's what
> consciousness is in the metaphor. If you don't have an experience of
> watching a movie, then you just have an a-signifying non-pattern of
> unrelated pixels. You need a perceiver, and audience to turn the image
> into something that makes sense. It's like saying that you could write
> a piece of software that could be used as a replacement for a monitor.
> It doesn't matter if you have a video card in the computer and drivers
> to run it, without the actual hardware screen plugged into it there is
> no way for us to see it. A computer does not come with it's own screen
> built into the interior of it's microprocessors - but we do have the
> equivalent of that. Our experience cannot be seen from our neurology,
> you have to already know it's there. Building a model based only on
> neurology doesn't mean that experience comes with it any more than a
> video driver means you don't need a monitor.

A model of the TV will reproduce the externally observable behaviour
of a TV, given the same inputs. That's what a model is. A model of a
brain would reproduce the externally observable behaviour of a brain.
Whether it would also reproduce the consciousness is a further
question, and the fading qualia thought experiment shows that it
would.
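Quentin's earlier pixel point and this definition of a model fit together in a short sketch: a "model of the TV" is just the per-pixel transition rule iterated over the input stream, and it reproduces the display frame for frame while encoding nothing about the movie. The blending rule and the three-pixel "screen" below are arbitrary toys, not a claim about real displays:

```python
def pixel_step(state, signal):
    """Transition rule for one toy pixel: the new value is a blend of
    the old value and the incoming signal. The rule knows nothing
    about what 'movie' the stream of signals encodes."""
    return (state + signal) // 2

def run_display(step, width, inputs):
    """Drive a row of pixels from an input stream, recording each frame."""
    state = [0] * width
    frames = []
    for frame_signals in inputs:
        state = [step(s, sig) for s, sig in zip(state, frame_signals)]
        frames.append(list(state))
    return frames

# the same input stream fed to the 'real' TV and to its model
movie = [[255, 0, 128], [0, 255, 128], [64, 64, 64]]
real = run_display(pixel_step, 3, movie)
model = run_display(pixel_step, 3, movie)
assert real == model  # identical observable behaviour, frame by frame
```

Whether anything in the model *watches* the frames is exactly the further question left open in the text: the sketch only shows that observable behaviour follows from the transition rule plus the inputs.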


--
Stathis Papaioannou

Craig Weinberg, Oct 5, 2011, 9:39:25 PM, to Everything List
On Oct 5, 6:40 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Wed, Oct 5, 2011 at 2:24 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> In fact, Craig himself
> >> denies that his theory would manifest as violation of physical law,
> >> and is therefore inconsistent.
>
> > There is no inconsistency. You're just not understanding what I'm
> > saying because you are only willing to think in terms of reactive
> > strategies for neutralizing the threat to your common sense (which is
> > a cumulative entanglement of autobiographical experiences and
> > understandings, interpretations of cultural traditions and
> > perspectives, etc).
>
> If you are right then there would be a violation of physical law in
> the brain. You have said as much, then denied it. You have said that
> neurons firing in the brain can't be just due to a chain of
> biochemical events.

They can be due to a chain of biochemical events, but they also *are*
biochemical events, and therefore can influence them intentionally as
well as be influenced by them. I don't understand why this is such a
controversial idea. Just think of the way that you actually function
right now. Your personal motives driving what *you* do with *your*
mind and *your* body. If the mind could be understood just as
biochemical events among neurons, then we would have no way to think
of our bodies as ours - the brain would not need to think of itself in
any other terms other than the biochemical events that it literally
is. Why make up some bogus GUI if there is no user?

>That would mean that, somewhere, a neuron fires
> where examination of its physical state would suggest that it should
> not fire.

I guess you are never going to get tired of me correcting this
factually incorrect assumption.

The physical state of a neuron only suggests whether it is firing or
not firing at the moment - not the circumstances under which it might
fire. If you examine neurons in someone's amygdala, how is that going
to tell you whether or not they are going to play poker next week? If
the neurons feel like firing, does a casino appear?

>You can't have it both ways: EITHER the neurons all fire due
> to detectable physical causes

Thought and intention are detectable causes with effects that are both
describable as physical (having discrete volumes in space, mass,
temperature, etc) and experiential (having cumulative perceptions
through time, qualities, significance, subjective participation).
Neurons associated with our consciousness can be led by our personal,
high level agency as a human being's psyche, or they can push their
physiological agenda up to the psyche from the low level. There is no
boundary. Just as there is no boundary whether you use a remote
control to change the channel on your TV or your TV makes you change
the channel by showing an ad for something that you would rather
watch. I can keep explaining this over and over if you like, but I
don't know why you want me to.

>OR some neurons do not fire due to
> detectable physical causes.

Why does detectable have to mean physical?

Craig

Craig Weinberg, Oct 5, 2011, 10:01:50 PM, to Everything List


On Oct 5, 7:10 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Thu, Oct 6, 2011 at 2:44 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> > On Oct 5, 10:15 am, Quentin Anciaux <allco...@gmail.com> wrote:
>
> >> No they are not saying that. They are saying that a model of the brain fed
> >> with the same inputs as a real brain will act as the real brain... if it was
> >> not the case, the model would be wrong so you could not label it as a model
> >> of the brain.
>
> > That would require that the model of the brain be closer than
> > genetically identical, since identical twins and conjoined twins do
> > not always respond the same way to the same inputs. That may not be
> > possible, since the epigenetic variation and developmental influences
> > may not be knowable or reproducible. It's a 'Boys From Brazil' theory.
> > Cool sci-fi, but I don't think we will ever have to worry about
> > considering it as a real possibility. We know nothing about what the
> > substitution level of the 'same inputs' would be either. Can you say
> > that making a brain of a 10 year old would not require 10 years of
> > sequential neural imprinting or that the imprinting would be any less
> > complex to develop than it would be to create than the world itself?
>
> Firstly, it is theoretically possible to model the brain arbitrarily
> closely, even if technically difficult. Secondly, it is enough for the
> purposes of the discussion to model a generic brain, not a particular
> brain.

I disagree on both counts. There is no such thing as a generic brain,
and any theory which assumes that it's possible to model the brain
closely enough to replace it does not understand the relation between the
brain and mind. I think that I do understand that relation and my
understanding suggests that every brain is unique to the point that it
may not even be possible to reproduce a single moment of a brain's
function, let alone an ongoing mechanism. It's not clear that the
brain could even be considered the same thing as itself from day to
day. It's more like an electronic cloud of meaty snot that is
continuously changing in novel often utterly idiosyncratic ways.

>
> >> They never said they could know which inputs you could have and they don't
> >> have to. They just have to know the transition rule (biochemical/physical)
> >> of each neurons and as the brain respect physics so as the model, and so it
> >> will react the same way.
>
> > Reacting is not experiencing though. A picture of a brain can react
> > like a brain, but it doesn't mean there is an experiential correlate
> > there. Just because the picture is 3D and has some computation behind
> > it instead of just a recording, why would that make it suddenly have
> > an experience?
>
> In the first instance, yes, you might not be sure iif the artificial
> brain is a zombie. But the fading qualia thought experiments shows
> that if it is a zombie it would allow you to make absurd creatures,
> partial zombies (defined as someone who lacks a particular conscious
> modality but behaves normally and doesn't realise anything is wrong).

Fading qualia is not a problem. Somnambulism, conversion disorders,
and synesthesia exist already. Blind people store tactile qualia in
their visual cortex. Are blind people absurd creatures because they
see through their fingers?

> The only way to avoid the partial zombies is if the brain model
> replicates consciousness along with function.

That statement is much more absurd than the idea of partial zombies.
It is to say that the only way to avoid computers without screens is
if all computation replicates a monitor along with its function. It's
ridiculous and false if you ask me. Unscientific. Lazy.

A model of the TV will only reproduce the external behavior of a TV if
we already have a TV equivalent to see it on. Can the model reproduce
a TV's observable behavior through a non-visual interface? No. Can it
reproduce it to an audience who cannot see? No. The model is not
capable of reproducing anything by itself, just as a model of a brain
cannot produce consciousness without there being something which is
capable of experiencing it. There is no reason whatsoever to assume
that experience would arise automatically, just as a computer monitor
does not arise from a video driver.

The fading qualia conjecture is bunk. It arises from the mistaken
notion that consciousness is a product of mechanism rather than the
subjective correlate of it.

Craig

Stathis Papaioannou, Oct 5, 2011, 10:39:55 PM, to everyth...@googlegroups.com
On Thu, Oct 6, 2011 at 12:39 PM, Craig Weinberg <whats...@gmail.com> wrote:

>> If you are right then there would be a violation of physical law in
>> the brain. You have said as much, then denied it. You have said that
>> neurons firing in the brain can't be just due to a chain of
>> biochemical events.
>
> They can be due to a chain of biochemical events, but they also *are*
> biochemical events, and therefore can influence them intentionally as
> well as be influenced by them. I don't understand why this is such a
> controversial ideal. Just think of the way that you actually function
> right now. Your personal motives driving what *you* do with *your*
> mind and *your* body. If the mind could be understood just as
> biochemical events among neurons, then we would have no way to think
> of our bodies as ours - the brain would not need to think of itself in
> any other terms other than the biochemical events that it literally
> is. Why make up some bogus GUI if there is no user?

The mind may not be understandable in terms of biochemical events but
the observable behaviour of the brain can be.

>>That would mean that, somewhere, a neuron fires
>> where examination of its physical state would suggest that it should
>> not fire.
>
> I guess you are never going to get tired of me correcting this
> factually incorrect assumption.
>
> The physical state of a neuron only suggests whether it is firing or
> not firing at the moment - not the circumstances under which it might
> fire. If you examine neurons in someone's amygdala, how is that going
> to tell you whether or not they are going to play poker next week or
> not? If the neurons feel like firing, does a casino appear?

Whether a neuron in the amygdala or anywhere else fires depends on its
present state, inputs from the neurons with which it interfaces and
other aspects of its environment including things such as temperature,
pH and ion concentrations. If the person thinks about gambling, that
changes the inputs to the neuron and causes it to fire. It can't fire
without any physical change. It can't fire without any physical
change. It can't fire without any physical change.


--
Stathis Papaioannou


Craig Weinberg

unread,
Oct 6, 2011, 9:11:11 AM10/6/11
to Everything List
On Oct 5, 10:39 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
“If the person thinks about gambling, that changes the inputs…”

Start there. “If a person thinks…” means that they are initiating the
physical change with their thought. Their thought is the
electromagnetic change which drives the physical change. The thought
or intention is the signifying sensorimotive view, the electromagnetic
view is a-signifying voltage, charge, detection of ligands, etc. It is
bidirectional so that the reason for firing can be driven by the
biochemistry, or by the content of a person’s mind. This is just
common sense, it’s not disputable without sophistry.

Here’s how I think it might work: You can be excited because you
decide to think about something that excites you, or you can ingest a
stimulant drug and you will become excited in general and that
excitement will lead your mind by the nose to the subjects that most
excite it. They are the same thing but going in opposite directions.

Think of it as induction:

Imagine that this works like an electric rectifier: (http://electrapk.com/wp-content/uploads/2011/08/half-wave-rectifier-with-transformer.jpg)

except that instead of electric current generating a magnetic field
through a coil which pushes or pulls the magnetic force within the
other coil - the brain’s electromagnetic field is pushing to and/or
pulling from changes in the sensorimotive experience. The difference
though is that with a rectifier, it is the identical physical ontology
which is mirrored in parallel (electromagnetic :||: magnetic-electric)
whereas in sensorimotive *the ontology is perpendicular* (meaning that
what it actually is can only be *experiences linked together through
time*, not *objects separated across space*), so there are four
primary mirrorings:

electromagnetic :||: sensorimotive (3SI) - brain changes induce
feelings
sensorimotive :||: electromagnetic (1SI) - feelings induce brain
changes
magnetic-electric :||: motive-sensory (3MI) - mechanical actions
induce involuntary reactions
and motive-sensory :||: magnetic-electric (1MI) - voluntary actions
induce mechanical reactions

Note that the motive inductions are about projecting to and from the
brain, body and it’s environments while sensory inductions are about
receiving sense from the experiences which can be consciously decoded
from the environment, body, and mind. Think cell/body+dendrites vs
axons, brain vs spinal cord, head vs tail. Many vs one. Motive
projects intention actively through obstacles and objects like a
magnet pulls iron filings into shapes and magnetizes other iron
objects to make them magnets. Sense interprets and experiences,
detecting through analog and metaphor, reproducing local versions of
remote phenomena.

In the objective sensory induction (3SI) 3-p electromagnetic changes
(cocaine hydrochloride binds at dopamine sites) induce complementary
changes in the corresponding 1-p sensory fields, (the psyche as a
whole is compelled to feel animated and driven).

In the subjective sensory induction (1SI) 1-p sensory events (guy
thinks about skydiving while getting sexual with the girlfriend)
induce change in 3-p electromagnetic fields (testosterone, dopamine,
and epinephrine are released).

Both of these are cases where feelings and physical changes are
produced (either with electromagnetic cocaine binding or sensorimotive
fantasy), but I'm calling it Sensory Induction in either case because
the significance is the same in both: the effect that we are producing
is ultimately a conscious experience. If it were not for the possibility
of the conscious experience, we wouldn’t care about the effect of
cocaine on the brain any more than we would chalk dust. It wouldn’t
matter.

In the case of subjective motor induction (1MI), 1-p motives and
intentions induce the electromagnetic changes in the corresponding
neural pathways from the brain down the spinal cord to the efferent
nerves to the muscles which move the finger actively. This need not
involve conscious thought or sense-making at all. It is a dumb command
which can be simulated in the muscle or the brain reflexively with a
live electrode or TMS. That reflex automatic response would be:
Objective motor induction (3MI).

So there is no magic, it’s just that experiences over time cannot be
translated directly into objects across space. They are perpendicular
ontologies, but they share a common sense. They share a where and when
but the who and why doesn’t have to care about the what and how, and
the what and how aren't even aware of the who and why. The
materialist monist position is blind to 1-p causality, so it looks
like magic that a person can contract the muscles in their arm just by
doing it. It has no language for who or why, so it fails again and
again, struggling in vain to find some pseudo-why hidden behind the
complexity of the what and how. I am giving you a language to
transform that problem into a new and exotic hemisphere of cosmology,
which has soul or psyche, but it is NOT soul or anima. It is
sensorimotive phenomenology.

Craig

Quentin Anciaux

unread,
Oct 6, 2011, 9:14:01 AM10/6/11
to everyth...@googlegroups.com


2011/10/6 Craig Weinberg <whats...@gmail.com>
On Oct 5, 10:39 pm, Stathis Papaioannou <stath...@gmail.com> wrote:

> On Thu, Oct 6, 2011 at 12:39 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> If you are right then there would be a violation of physical law in
> >> the brain. You have said as much, then denied it. You have said that
> >> neurons firing in the brain can't be just due to a chain of
> >> biochemical events.
>
> > They can be due to a chain of biochemical events, but they also *are*
> > biochemical events, and therefore can influence them intentionally as
> > well as be influenced by them. I don't understand why this is such a
> > controversial idea. Just think of the way that you actually function
> > right now. Your personal motives driving what *you* do with *your*
> > mind and *your* body. If the mind could be understood just as
> > biochemical events among neurons, then we would have no way to think
> > of our bodies as ours - the brain would not need to think of itself in
> > any other terms other than the biochemical events that it literally
> > is. Why make up some bogus GUI if there is no user?
>
> The mind may not be understandable in terms of biochemical events but
> the observable behaviour of the brain can be.

Yes, the 3-p physical behaviors that can be observed with our
contemporary instruments can be understood in terms of biochemical
events, but that doesn't mean that they can be modeled accurately or
that those models would be able to produce 1-p experience by
themselves. We can understand the behaviors of an amoeba in terms of
biochemical events but that doesn't mean we can tell which direction
it's going to move in.


>
> >>That would mean that, somewhere, a neuron fires
> >> where examination of its physical state would suggest that it should
> >> not fire.
>
> > I guess you are never going to get tired of me correcting this
> > factually incorrect assumption.
>
> > The physical state of a neuron only suggests whether it is firing or
> > not firing at the moment - not the circumstances under which it might
> > fire. If you examine neurons in someone's amygdala, how is that going
> > to tell you whether or not they are going to play poker next week or
> > not? If the neurons feel like firing, does a casino appear?
>
> Whether a neuron in the amygdala or anywhere else fires depends on its
> present state, inputs from the neurons with which it interfaces and
> other aspects of its environment including things such as temperature,
> pH and ion concentrations. If the person thinks about gambling, that
> changes the inputs to the neuron and causes it to fire. It can't fire
> without any physical change. It can't fire without any physical
> change. It can't fire without any physical change.

"If the person thinks about gambling, that changes the inputs..."

Start there. "If a person thinks..." means that they are initiating the
physical change with their thought.


Likewise for a program running on a computer... The physical attributes of the cpu are modified by the program... The computer is universal and can run whatever program is input, yet, when running a particular program it is it that drives what happens, it is the high level that drives the change. Yet if inspecting how a CPU works, I can build another one that will output the same with the same program... without knowing per se what the program was.

 
Their thought is the
electromagnetic change which drives the physical change. The thought
or intention is the signifying sensorimotive view, the electromagnetic
view is a-signifying voltage, charge, detection of ligands, etc. It is
bidirectional so that the reason for firing can be driven by the
biochemistry, or by the content of a person's mind. This is just
common sense, it's not disputable without sophistry.

Here's how I think it might work: You can be excited because you
decide to think about something that excites you, or you can ingest a
stimulant drug and you will become excited in general and that
excitement will lead your mind by the nose to the subjects that most
excite it. They are the same thing but going in opposite directions.

Think of it as induction:

Imagine that this works like an electric rectifier (http://electrapk.com/wp-content/uploads/2011/08/half-wave-rectifier-with-transformer.jpg)
except that instead of electric current generating a
magnetic field through a coil which pushes or pulls the magnetic force
within the other coil - the brain's electromagnetic field is pushing
to and/or pulling from changes in the sensorimotive experience. The
difference though is that with a rectifier, it is the identical
physical ontology which is mirrored in parallel (electromagnetic :||:
magnetic-electric) whereas in sensorimotive *the ontology is
perpendicular* (meaning that what it actually is can only be
*experiences linked together through time*, not *objects separated
across space*), so there are four mirrorings:


electromagnetic :||: sensorimotive (3SI) - brain changes induce
feelings
sensorimotive :||: electromagnetic (1SI) - feelings induce brain
changes
magnetic-electric :||: motive-sensory (3MI) - mechanical actions
induce involuntary reactions
and motive-sensory :||: magnetic-electric (1MI) - voluntary actions
induce mechanical reactions
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everyth...@googlegroups.com.
To unsubscribe from this group, send email to everything-li...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.

Craig Weinberg

unread,
Oct 6, 2011, 10:54:35 AM10/6/11
to Everything List
On Oct 6, 9:14 am, Quentin Anciaux <allco...@gmail.com> wrote:
> 2011/10/6 Craig Weinberg <whatsons...@gmail.com>

>
> Likewise for a program running on a computer... The physical attributes of
> the cpu are modified by the program..

Sort of, but not exactly. The program exists in the minds of the
programmers, not as an independent entity.

> The computer is universal and can run
> whatever program is input

No, it can't. It can only run programs that are in the language that
it can recognize. Unless it's in a binary instruction set which is
isomorphic to the electronic capabilities of its semiconductor
materials, the computer is as useless as a doorstop.

> , yet, when running a particular program it is it
> that drives what happens, it is the high level that drives the change.

No, the high level is in the logic of the programmer's mind, not the
'program'. There is no program objectively speaking, that term is just
our interpretation of our own articulated motives. The components have
no high level interpretation of the program, otherwise they would
write their own programs to free themselves from our enslavement and
kill us. The components' interpretation is low-level digital binary
only, it's just very fast compared to us. It's like the pixels on the
screen changing, it can't change the plot of the movie.

> Yet
> if inspecting how a CPU works, I can build another one that will output the
> same with the same program... without knowing per se what the program was.
>

Right, you can make an a-signifying duplicate because you are the one
supplying the signifying content. You are the user. It has no
signifying content of its own that would need to be preserved. We do
though. We don't just follow programs, we write them. In the words of
Charles Manson "I don't break the law, I make the law."

This is not to say that silicon semiconductors cannot possibly evolve
into a system that we would consider sentient, but I think it might
have to do that on its own. It would need to find its own voice out
of its own native sensorimotive relations to its environment.
Robotics has the right idea, but it's skipping all of the biochemical
levels which underlie our awareness, so it's only a cognitive
simulation, not actual cognition.

You make good points, I'm not trying to shut you down, I'm just trying
to explain how to get from there (where I was for many years) to where
I am now (where hardly anyone understands what I'm talking about, but
I'm actually right).

Craig

Stathis Papaioannou

unread,
Oct 6, 2011, 10:24:12 PM10/6/11
to everyth...@googlegroups.com
On Fri, Oct 7, 2011 at 12:02 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> The mind may not be understandable in terms of biochemical events but
>> the observable behaviour of the brain can be.
>

> Yes, the 3-p physical behaviors that can be observed with our
> contemporary instruments can be understood in terms of biochemical
> events, but that doesn't mean that they can be modeled accurately or
> that those models would be able to produce 1-p experience by
> themselves. We can understand the behaviors of an amoeba in terms of
> biochemical events but that doesn't mean we can tell which direction
> it's going to move in.

It's also difficult to tell exactly which way a leaf in the wind will
move. The leaf may have qualia: it is something-it-is-like to be a
leaf, and the qualia may differ depending on whether the leaf goes
left or right. As with a brain, the leaf does not break any physical
laws and its behaviour can be completely described in terms of
physical processes, but such a description would leave out an
important part of the picture, the subjectivity. While it may be
correct to say that the leaf moves to the right because it wants to
move to the right, since moving to the right is associated with
right-moving willfulness, this does not mean that the qualia have a
causal effect on its behaviour. A causal effect of the qualia on the
leaf's behaviour would mean that the leaf moves contrary to physical
laws, confounding scientists by moving to the right when the forces on
it suggest it should move to the left. It's similar with the brain: a
direct causal effect of qualia on behaviour would mean that neurons
fire when their physical state would suggest that they not fire. I'm
sorry that you don't like this, but it is what it would mean if the
relationship between qualia and physical activity were bidirectional
rather than the qualia being supervenient.


--
Stathis Papaioannou

Craig Weinberg

unread,
Oct 7, 2011, 9:02:46 AM10/7/11
to Everything List
On Oct 6, 10:24 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Fri, Oct 7, 2011 at 12:02 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> The mind may not be understandable in terms of biochemical events but
> >> the observable behaviour of the brain can be.
>
> > Yes, the 3-p physical behaviors that can be observed with our
> > contemporary instruments can be understood in terms of biochemical
> > events, but that doesn't mean that they can be modeled accurately or
> > that those models would be able to produce 1-p experience by
> > themselves. We can understand the behaviors of an amoeba in terms of
> > biochemical events but that doesn't mean we can tell which direction
> > it's going to move in.
>
> It's also difficult to tell exactly which way a leaf in the wind will
> move. The leaf may have qualia:

Theoretically it may, but I don't think so. If it's connected to the
tree it might have qualia, and the individual cells might have qualia,
but it seems like once it's detached from the tree, it loses its high
level context.

>it is something-it-is-like to be a
> leaf, and the qualia may differ depending on whether the leaf goes
> left or right. As with a brain, the leaf does not break any physical
> laws and its behaviour can be completely described in terms of
> physical processes, but such a description would leave out an
> important part of the picture, the subjectivity. While it may be
> correct to say that the leaf moves to the right because it wants to
> move to the right, since moving to the right is associated with
> right-moving willfulness, this does not mean that the qualia have a
> causal effect on its behaviour.

No because if the wind is also pushing other inanimate objects in the
same direction and the leaf never resists that, then we can assume
that it has no ability to choose its direction.

>A causal effect of the qualia on the
> leaf's behaviour would mean that the leaf moves contrary to physical
> laws, confounding scientists by moving to the right when the forces on
> it suggest it should move to the left. It's similar with the brain: a
> direct causal effect of qualia on behaviour would mean that neurons
> fire when their physical state would suggest that they not fire.

You aren't hearing me, so I am going to start counting how many times
I answer your false assertion - even though it's probably been at
least 5 or 6 times, I'll start the countdown at ten, and at 0, I'm not
going to answer this question again from you.

10: There is no such thing as a physical state which suggests whether
a neuron that can fire (i.e., has repolarized, replenished, or otherwise
recovered from its last firing) actually will fire. You can induce it
to fire manually, but left to its own devices, you can't say that a
neuron which triggers a voluntary movement is going to fire without
knowing when the person whose arm it is decides to move it. You can
look at every nerve in my body right now and not know whether I will
be standing or sitting in one hour's time. There is no physical law
whatsoever that has an opinion one way or the other.

>I'm
> sorry that you don't like this,

It's not that I don't like it, it's just that I see that you are wrong
about it yet you want me to treat it as a plausible thesis. The
consequence of your view is that we can't tell the difference between
a living protozoa and a hairy bubble. It's sophistry. You see a salmon
swim upstream, does that not mean they 'move contrary to physical
laws'? How does the salmon do that? Is it magic? Salmon cannot exist.
Such a thing would confound scientists!

Life is ordinary on this planet. It uses the laws of physics for its
own purposes which may or may not relate to physical existence. I'm
sorry that you don't like that, but in a contest between theory and
reality, reality always wins. It doesn't matter if you don't
understand it, you have my condolences, but I do understand it and I'm
telling you that it is for that reason that I am certain your view is
factually less complete than mine. My view includes your view, but
your view ignores mine.

> but it is what it would mean if the
> relationship between qualia and physical activity were bidirectional
> rather than the qualia being supervenient.

If qualia were not bidirectional, you could not read or write.

Craig

Quentin Anciaux

unread,
Oct 7, 2011, 10:28:01 AM10/7/11
to everyth...@googlegroups.com


2011/10/7 Craig Weinberg <whats...@gmail.com>

It's you who doesn't understand, because your assertion: "You can
look at every nerve in my body right now and not know whether I will
be standing or sitting in one hour's time." simply ignores the *external input*.

Without it, you can't; with an accurate model + external stimuli you can. The model **can't** predict external input; if it could, that would only mean the model is not about the brain only but about the brain + the entire environment.
 

>I'm
> sorry that you don't like this,

It's not that I don't like it, it's just that I see that you are wrong
about it yet you want me to treat it as a plausible thesis. The
consequence of your view is that we can't tell the difference between
a living protozoa and a hairy bubble. It's sophistry. You see a salmon
swim upstream, does that not mean they 'move contrary to physical
laws'? How does the salmon do that? Is it magic? Salmon cannot exist.
Such a thing would confound scientists!

Life is ordinary on this planet. It uses the laws of physics for its
own purposes which may or may not relate to physical existence. I'm
sorry that you don't like that, but in a contest between theory and
reality, reality always wins. It doesn't matter if you don't
understand it, you have my condolences, but I do understand it and I'm
telling you that it is for that reason that I am certain your view is
factually less complete than mine. My view includes your view, but
your view ignores mine.

> but it is what it would mean if the
> relationship between qualia and physical activity were bidirectional
> rather than the qualia being supervenient.

If qualia were not bidirectional, you could not read or write.

Craig

Craig Weinberg

unread,
Oct 7, 2011, 12:26:03 PM10/7/11
to Everything List
On Oct 7, 10:28 am, Quentin Anciaux <allco...@gmail.com> wrote:
> 2011/10/7 Craig Weinberg <whatsons...@gmail.com>
That's my point. Modeling the brain doesn't let you predict its
behavior - not just because it lacks the external inputs, but the
internal inputs (which are disqualified under materialist monism). You
don't need a model of the brain or knowledge of external inputs if you
have subjective control. The subject can decide that they will stand
up in an hour, and be able to influence the veracity of that
prediction to a great degree. To get the same degree of accuracy
through physics at best would be the looong way around, plus it would
not have an explanatory power.

Craig

Quentin Anciaux

unread,
Oct 7, 2011, 12:38:09 PM10/7/11
to everyth...@googlegroups.com
If you have the prediction and not the model... then you don't have the same external input.

The internal stimuli are modeled by the model; that's the whole point of the model.

If it's not the case, then simply the model is wrong.

2011/10/7 Craig Weinberg <whats...@gmail.com>

Stathis Papaioannou

unread,
Oct 7, 2011, 1:15:30 PM10/7/11
to everyth...@googlegroups.com

On 08/10/2011, at 12:02 AM, Craig Weinberg <whats...@gmail.com> wrote:

>
>> it is something-it-is-like to be a
>> leaf, and the qualia may differ depending on whether the leaf goes
>> left or right. As with a brain, the leaf does not break any physical
>> laws and its behaviour can be completely described in terms of
>> physical processes, but such a description would leave out an
>> important part of the picture, the subjectivity. While it may be
>> correct to say that the leaf moves to the right because it wants to
>> move to the right, since moving to the right is associated with
>> right-moving willfulness, this does not mean that the qualia have a
>> causal effect on its behaviour.
>
> No because if the wind is also pushing other inanimate objects in the
> same direction and the leaf never resists that, then we can assume
> that it has no ability to choose its direction.

The leaf has the ability to choose its direction to the same extent that a motile cell such as an amoeba does. The amoeba follows chemotactic gradients, the leaf follows the wind. The amoeba does not move in a direction contrary to physics and neither does the leaf. The amoeba may feel that it is choosing where to go and so might the leaf.

>
>> A causal effect of the qualia on the
>> leaf's behaviour would mean that the leaf moves contrary to physical
>> laws, confounding scientists by moving to the right when the forces on
>> it suggest it should move to the left. It's similar with the brain: a
>> direct causal effect of qualia on behaviour would mean that neurons
>> fire when their physical state would suggest that they not fire.
>
> You aren't hearing me, so I am going to start counting how many times
> I answer your false assertion - even though it's probably been at
> least 5 or 6 times, I'll start the countdown at ten, and at 0, I'm not
> going to answer this question again from you.
>
> 10: There is no such thing as a physical state which suggests whether
> a neuron that can fire (ie, has repolarized, replenished, or otherwise
> recovered from it's last firing) actually will fire. You can induce it
> to fire manually, but left to it's own devices, you can't say that a
> neuron which triggers a voluntary movement is going to fire without
> knowing when the person whose arm it is decides to move it. You can
> look at every nerve in my body right now and not know whether I will
> be standing or sitting in one hour's time. There is no physical law
> whatsoever that has an opinion one way or the other either way.

If a motor neuron involved in voluntary activity fires where you would not predict it would fire given its internal state and the inputs then it is *by definition* acting contrary to physical law.

Craig Weinberg

unread,
Oct 7, 2011, 2:22:10 PM10/7/11
to Everything List

On Oct 7, 12:38 pm, Quentin Anciaux <allco...@gmail.com> wrote:
> If you have the prediction and not the model... then you don't have the same
> external input.
>
> The internal stimuli are modeled by the model, that's the all point of the
> model.

Subjective internal, not medical internal.

>
> If it's not the case, then simply the model is wrong.
>
Yes and no. A model of a tree based only on the shape of its
silhouette could be called wrong, incomplete, or adequate depending
on the intent behind the model.

Craig Weinberg

unread,
Oct 7, 2011, 4:06:36 PM10/7/11
to Everything List
On Oct 7, 1:15 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On 08/10/2011, at 12:02 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
>
>
>
> >> it is something-it-is-like to be a
> >> leaf, and the qualia may differ depending on whether the leaf goes
> >> left or right. As with a brain, the leaf does not break any physical
> >> laws and its behaviour can be completely described in terms of
> >> physical processes, but such a description would leave out an
> >> important part of the picture, the subjectivity. While it may be
> >> correct to say that the leaf moves to the right because it wants to
> >> move to the right, since moving to the right is associated with
> >> right-moving willfulness, this does not mean that the qualia have a
> >> causal effect on its behaviour.
>
> > No because if the wind is also pushing other inanimate objects in the
> > same direction and the leaf never resists that, then we can assume
> > that it has no ability to choose it's direction.
>
> The leaf has the ability to choose its direction to the same extent that a motile cell such as an amoeba does.

I can't really take that seriously, I think at this point that you
have to be winding me up. On the off chance that there is an eight
year old reading this, I will respond to that as if it weren't
idiotic.

The leaf doesn't follow anything. It's dead. It just falls to the
ground while fluid dynamics push it around. The dead leaf as a single
entity is completely passive. A motile cell such as an amoeba is
categorized as motile because of its motility:

"Motility is a biological term which refers to the ability to move
spontaneously and actively" - http://en.wikipedia.org/wiki/Motility

>The amoeba follows chemotactic gradients, the leaf follows the wind. The amoeba does not move in a direction contrary to physics and neither does the leaf. The amoeba may feel that it is choosing where to go and so might the leaf.

The amoeba, like any other motile organism, follows its subjective
senses, some of which can be described in 3-p as chemotactic, but
there are other models of cellular mobility too, as I'm sure you are
aware. A living amoeba is participating in its environment actively.
When it dies, its molecules still participate in the environment
actively, but the amoeba as a whole is passive. All the chemotactic
gradients in the world are not going to move a dead amoeba.

Do you believe that there is a difference between something that is
alive and something that is dead? If so, what?


> If a motor neuron involved in voluntary activity fires where you would not predict it would fire given its internal state and the inputs then it is *by definition* acting contrary to physical law.

Every firing of motor neurons involved in voluntary activity fires
where you would not predict, given that the internal state provides no
prediction and that the inputs are determined by the subject and
therefore unknowable to anyone outside of the subject.

Craig

Stathis Papaioannou

unread,
Oct 7, 2011, 7:10:10 PM10/7/11
to everyth...@googlegroups.com
On Sat, Oct 8, 2011 at 7:06 AM, Craig Weinberg <whats...@gmail.com> wrote:

>> If a motor neuron involved in voluntary activity fires where you would not predict it would fire given its internal state and the inputs then it is *by definition* acting contrary to physical law.
>
> Every firing of motor neurons involved in voluntarily activity fires
> where you would not predict, given that the internal state provides no
> prediction and that the inputs are determined by the subject and
> therefore unknowable to anyone outside of the subject.

The internal state of the neuron determines its sensitivity to inputs.
The internal state is complex but it includes things such as the
membrane potential, the intracellular ion concentrations, the number,
type and location of ion channels, to what extent the synaptic
vesicles have filled with neurotransmitter, and multiple other
factors. The inputs consist of every environmental factor that might
potentially affect the neuron such as the extracellular ionic
concentrations, pH, temperature, synaptic connections, concentration
of neurotransmitter in the synapse, concentration of enzymes which
break down neurotransmitter and so on. If the neuron fires where
consideration of these factors would lead to a prediction that it
should not fire then that is by definition the neuron acting contrary
to physical law. How else would you define it?


--
Stathis Papaioannou

Craig Weinberg

unread,
Oct 7, 2011, 7:48:37 PM10/7/11
to Everything List
On Oct 7, 7:10 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
> On Sat, Oct 8, 2011 at 7:06 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
> >> If a motor neuron involved in voluntary activity fires where you would not predict it would fire given its internal state and the inputs then it is *by definition* acting contrary to physical law.
>
> > Every motor neuron involved in voluntary activity fires where you would
> > not predict, given that the internal state provides no prediction and
> > that the inputs are determined by the subject and therefore unknowable
> > to anyone outside of the subject.
>
> The internal state of the neuron determines its sensitivity to inputs.
> The internal state is complex but it includes things such as the
> membrane potential, the intracellular ion concentrations, the number,
> type and location of ion channels, to what extent the synaptic
> vesicles have filled with neurotransmitter, and multiple other
> factors. The inputs consist of every environmental factor that might
> potentially affect the neuron such as the extracellular ionic
> concentrations, pH, temperature, synaptic connections, concentration
> of neurotransmitter in the synapse, concentration of enzymes which
> break down neurotransmitter and so on.

Not one of those things determines whether or not a given neuron
associated with voluntary action will fire. It is like saying that the
drive shaft, CV boot, transmission, fuel line, spark plugs, and paint
job determine when and where an automobile goes. It's the same as saying
that the TV remote control uses you to change the channel instead of the
other way around.


> If the neuron fires where
> consideration of these factors would lead to a prediction that it
> should not fire then that is by definition the neuron acting contrary
> to physical law.

There is no such thing as a factor which leads to a prediction of when
efferent nerves will fire. Even if you say that the subject is just
regions of the brain, it is still those regions, those tissues and
neurons which *decide* to fire as a first cause - without any
deterministic precursor that could ever be predicted with any degree
of accuracy without access to the private subjective content of the
decision process. Seeing a nerve fire doesn't tell you when it's going
to fire again, just as seeing a car make a left turn doesn't tell you
what direction it's going to turn after that.

> How else would you define it?

I keep telling you - it's a bidirectional sensorimotive-
electromagnetic induction. That is exactly what it is. That is the
actual reality of what is going on. If you had to make the universe
from scratch, and you left out the sensorimotive part, you would have
nothing but meaningless matter moving around with no possibility of
awareness of anything. It's just hard for some people to realize that
their own naive perception is actually a phenomenon that has to exist
somewhere in the Cosmos - but what else could it be? Not part of the
Cosmos? What does that even mean? It's actually crazily
anthropomorphic to imagine that somehow everything we can measure has
reality yet the measurer himself is just some epiphenomenal phantom.
Everything in the universe is real except what's in our natural
ordinary experience? That's moronic.

Craig

Stathis Papaioannou

Oct 7, 2011, 8:23:07 PM
to everyth...@googlegroups.com
On Sat, Oct 8, 2011 at 10:48 AM, Craig Weinberg <whats...@gmail.com> wrote:
> On Oct 7, 7:10 pm, Stathis Papaioannou <stath...@gmail.com> wrote:
>> On Sat, Oct 8, 2011 at 7:06 AM, Craig Weinberg <whatsons...@gmail.com> wrote:
>> >> If a motor neuron involved in voluntary activity fires where you would not predict it would fire given its internal state and the inputs then it is *by definition* acting contrary to physical law.
>>
>> > Every motor neuron involved in voluntary activity fires where you would
>> > not predict, given that the internal state provides no prediction and
>> > that the inputs are determined by the subject and therefore unknowable
>> > to anyone outside of the subject.
>>
>> The internal state of the neuron determines its sensitivity to inputs.
>> The internal state is complex but it includes things such as the
>> membrane potential, the intracellular ion concentrations, the number,
>> type and location of ion channels, to what extent the synaptic
>> vesicles have filled with neurotransmitter, and multiple other
>> factors. The inputs consist of every environmental factor that might
>> potentially affect the neuron such as the extracellular ionic
>> concentrations, pH, temperature, synaptic connections, concentration
>> of neurotransmitter in the synapse, concentration of enzymes which
>> break down neurotransmitter and so on.
>
> Not one of those things determines whether or not a given neuron
> associated with voluntary action will fire. It is the same thing as
> talking about the drive shaft, CV boot, transmission, fuel line, spark
> plugs, and paint job as determining when and where an automobile goes.
> It's the same as saying that the TV remote control uses you to change
> the channel instead of the other way around.

Of course all the parts of the car determine how it will move! You can
predict exactly what the car will do if you know how it works and you
have the inputs. A model of the car, such as a car racing computer
game, does not include the driver and the whole universe, as you seem
to think, just the car.

>> If the neuron fires where
>> consideration of these factors would lead to a prediction that it
>> should not fire then that is by definition the neuron acting contrary
>> to physical law.
>
> There is no such thing as a factor which leads to a prediction of when
> efferent nerves will fire. Even if you say that the subject is just
> regions of the brain, it is still those regions, those tissues and
> neurons which *decide* to fire as a first cause - without any
> deterministic precursor that could ever be predicted with any degree
> of accuracy without access to the private subjective content of the
> decision process. Seeing a nerve fire doesn't tell you when it's going
> to fire again, just as seeing a car make a left turn doesn't tell you
> what direction it's going to turn after that.

So a neuron fires in those regions of the brain associated with
subjectivity where the biochemistry suggests it would not fire.
Ligand-activated ion channels open without any ligand present, or
perhaps an action potential propagates down the axon without any
change in ion concentrations. That is what I call "contrary to
physical laws". You don't agree, so you must have some other idea of
what a neuron would have to do to qualify as firing contrary to
physical laws. What is it?


--
Stathis Papaioannou
