On Jul 13, 2011, at 7:04 PM, Craig Weinberg <whats...@gmail.com>
wrote:
>> Again, all that matters is that the *outputs* that influence other
>> neurons are just like those of a real neuron; any *internal*
>> processes in the substitute are just supposed to be artificial
>> simulations of what goes on in a real neuron, so there might be
>> simulated genes (in a simulation running on something like a
>> silicon chip or other future computing technology) but there'd be
>> no need for actual DNA molecules inside the substitute.
>
> The assumption is that there is a meaningful difference between the
> processes physically within the cell and those that are input and
> output between the cells. That is not my view. Just as the glowing
> blue chair you are imagining now (is it a recliner? A futuristic
> cartoon?) is not physically present in any neuron or group of neurons
> in your skull -
If it is not present physically, then what causes a person to say "I
am imagining a blue chair"?
> under any imaging system or magnification. My idea of
> 'interior' is different from the physical inside of the cell body of a
> neuron. It is the interior topology. It's not even a place, it's just
> a sensorimotive
Could you please define this term? I looked it up but the
definitions I found did not seem to fit.
> awareness of itself and its surroundings - hanging on
> to its neighbors, reaching out to connect, expanding and contracting
> with the mood of the collective. This is what consciousness is. This
> is who we are. The closer you get to the exact nature of the neuron,
> the closer you get to human consciousness.
There is such a thing as too low a level. What leads you to believe
the neuron is the appropriate level to find qualia, rather than the
states of neuron groups or the whole brain? Taking the opposite
direction, why not say it must be explained in terms of chemistry or
quarks? What led you to conclude it is the neurons? After all, are
rat neurons very different from human neurons? Do rats have the same
range of qualia as we do?
> If you insist upon using
> inorganic materials, that really limits the degree to which the
> feelings it can host will be similar.
Assuming qualia supervene on the individual cells or their chemistry.
> Why wouldn't you need DNA to
> feel like something based on DNA in practically every one of its
> cells?
You would have to show that the presence of DNA in part determines the
evolution of the brain's neural network. If not, it is as relevant to
you and your mind as the neutrinos passing through you.
>
>
>> The idea is just that *some* sufficiently detailed digital
>> simulation would behave just like real neurons and a real brain,
>> and "functionalism" as a philosophical view says that this
>> simulation would have the same mental properties (such as qualia,
>> if the functionalist thinks of "qualia" as something more than just
>> a name for a certain type of physical process) as the original brain.
>
> A digital simulation is just a pattern in an abacus.
The state of an abacus is just a number, not a process. I think you
may not have a full understanding of the differences between a Turing
machine and a string of bits. A Turing machine can mimic any process
that is definable and does not take an infinite number of steps.
Turing machines are dynamic, self-directed entities. This
distinguishes them from cartoons, YouTube videos and the state of an
abacus.
Since they have such a universal capability to mimic processes, the
idea that the brain is a process leads naturally to the idea of
intelligent computers which could function identically to organic
brains.
Then, if you deny the logical possibility of zombies, or fading
qualia, you must accept such an emulation of a human mind would be
equally conscious.
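The distinction being drawn here, between a static string of bits and a machine that acts on it, can be illustrated with a minimal sketch (Python; the bit-flipping machine, its rule table, and all names are my own invention for illustration, not anything from this thread):

```python
# A minimal Turing machine: the tape alone is a static string of
# symbols; the transition rules plus the moving head make it a
# dynamic process. This toy machine flips every bit on the tape
# and halts when it reaches the blank "_" past the end of input.

def run_turing_machine(tape, rules, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        new_symbol, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = new_symbol
        else:
            tape.append(new_symbol)
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rules: in state "start", flip the scanned bit and move right;
# on the blank marking the end of the input, write blank and halt.
FLIP_RULES = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110", FLIP_RULES))  # -> 1001_
```

The tape by itself is as inert as an abacus; the rule table driving the loop is what makes the whole thing a process rather than a number.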
> If you've got a
> gigantic abacus and a helicopter, you can make something that looks
> like whatever you want it to look like from a distance, but it's still
> just an abacus. It has no subjectivity beyond the physical materials
> that make up the beads.
The idea behind a computer simulation of a mind is not to make
something that looks like a brain but to make something that behaves
and works like a brain.
>
>
>> Everything internal to the boundary of the neuron is simulated,
>> possibly using materials that have no resemblance to biological ones.
>
> It's a dynamic system,
So is a Turing machine.
> there is no boundary like that. The
> neurotransmitters are produced by and received within the neurons
> themselves. If something produces and metabolizes biological
> molecules, then it is functioning at a biochemical level and not at
> the level of a digital electronic simulation. If you have a heat sink
> for your device it's electromotive. If you have an insulin pump it's
> biological, if you have a serotonin reuptake receptor, it's
> neurological.
>
>> So if you replace the inside of one volume with a very different
>> system that nevertheless emits the same pattern of particles at the
>> boundary of the volume, systems in other adjacent volumes "don't
>> know the difference" and their behavior is unaffected.
>
No, I don't think that's how living things work. Remember that people's
> bodies often reject living tissue transplanted from other human
> beings.
Rejection requires the body knowing there is a difference, which is
against the starting assumption.
>
>
>> You didn't address my question about whether you agree or disagree
>> with physical reductionism in my last post, can you please do that
>> in your next response to me?
>
> I agree with physical reductionism as far as the physical side of
> things is concerned. Qualia is the opposite that would be subject to
> experiential irreductionism. Which is why you can print Shakespeare on
> a poster or a fortune cookie and it's still Shakespeare, but you can't
> make enriched uranium out of corned beef or a human brain out of table
> salt.
>
>> Because I'm just talking about the behavioral aspects of
>> consciousness now, since it's not clear if you actually accept or
>> reject the premise that it would be possible to replace neurons
>> with functional equivalents that would leave *behavior* unaffected.
>
I'm rejecting the premise that there is such a thing as a functional
> replacement for a neuron that is sufficiently different from a neuron
> that it would matter.
I pasted real-life counterexamples to this: artificial cochleas and
retinas.
> You can make a prosthetic appliance which your
> nervous system will make do with, but it can't replace the nervous
> system altogether.
At what point does the replacement magically stop working?
> The nervous system predicts and guesses. It can
> route around damage or utilize a device which it can understand how to
> use.
So it can use an artificial retina but not an artificial neuron?
> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
> .
>
>If it is not present physically, then what causes a person to say "I
>am imagining a blue chair"?
A sensorimotive circuit. A sensory feeling which is a desire to
fulfill itself through the motive impulse to communicate that
statement.
>Could you please define this term? I looked it up but the
>definitions I found did not seem to fit.
Nerves are referred to as afferent and efferent also. My idea is that
all nerve functionality is sense (input) and motive (output). I would
say motor, but it's confusing because something like changing your
mind or making a choice is motive but not physically expressed as
motor activity, but I think that they are the same thing. I am
generalizing what nerves do to the level of physics, so that our
nerves are doing the same thing that all matter is doing, just
hypertrophied to host more meta-elaborated sensorimotive phenomena.
>There is such a thing as too low a level. What leads you to believe
>the neuron is the appropriate level to find qualia, rather than the
>states of neuron groups or the whole brain?
I didn't say it was. I was just saying that the more similar you can
get to imitating a human neuron, the more similar a brain based on
that imitation will be to having the potential for human
consciousness.
>You would have to show that the presence of DNA in part determines the
>evolution of the brain's neural network. If not, it is as relevant to
>you and your mind as the neutrinos passing through you.
Chromosome mutations cause mutations in the brain's neural network, do
they not?
btw, I don't interpret neutrinos, photons, or other massless
chargeless phenomena as literal particles. QM is a misinterpretation.
Accurate, but misinterpreted.
>> A digital simulation is just a pattern in an abacus.
>The state of an abacus is just a number, not a process. I think you
>may not have a full understanding of the differences between a Turing
>machine and a string of bits. A Turing machine can mimic any process
>that is definable and does not take an infinite number of steps.
>Turing machines are dynamic, self-directed entities. This
>distinguishes them from cartoons, YouTube videos and the state of an
>abacus.
A pattern is not necessarily static, especially not an abacus, the
purpose of which is to be able to change the positions to any number.
Just like a cartoon. If you are defining Turing machines as
self-directed entities then you have already defined them as
conscious, so it's a fallacy to present it as a question.
Since I think that a
machine cannot have a self, but is instead the self's perception of
the self's opposite, I'm not compelled by any arguments which imagine
that purely quantitative phenomena (if there were such a thing) can be
made to feel.
>Then, if you deny the logical possibility of zombies, or fading
>qualia, you must accept such an emulation of a human mind would be
>equally conscious.
These ideas are not applicable in my model of consciousness and its
relation to neurology.
>The idea behind a computer simulation of a mind is not to make
>something that looks like a brain but to make something that behaves
>and works like a brain.
I think that for it to work exactly like a brain it has to be a brain.
If you want something that behaves like an intelligent automaton, then
you can use a machine made of inorganic matter. If you want something
that feels and behaves like a living organism, then you have to create
a living organism out of matter that can self-replicate and die.
>Rejection requires the body knowing there is a difference, which is
>against the starting assumption.
If you are already defining something as biologically identical, then
you are effectively asking 'if something non-biological were
biological, would it perform biological functions?'
>I pasted real-life counterexamples to this: artificial cochleas and
>retinas.
Those are not replacements for neurons, they are prostheses for a
nervous system. Big difference. I can replace a car engine with
horses, but I can't replace a horse's brain with a car engine.
>At what point does the replacement magically stop working?
At what point does cancer magically stop you from waking up?
>So it can use an artificial retina but not an artificial neuron?
A neuron can use an artificial neuron, but a person can't use an
artificial neuron except through a living neuron.
I don't want to talk about inner experience. I want to talk about my fundamental reordering of the cosmos, which, if it were correct, would be staggeringly important, and which I have not seen anywhere else:
- Mind and body are not merely separate, but perpendicular topologies of the same ontological continuum of sense.
- The interior of electromagnetism is sensorimotive, the interior of determinism is free will, and the interior of general relativity is perception.
- Quantum Mechanics is a misinterpretation of atomic quorum sensing.
- Time, space, and gravity are void. Their effects are explained by perceptual relativity and sensorimotor electromagnetism.
- The "speed of light" c is not a speed; it's a condition of nonlocality or absolute velocity, representing a third state of physical relation as the opposite of both stillness and motion.
It's not about meticulous logical deduction, it's about grasping the largest, broadest description of the cosmos possible which doesn't leave anything out. I just want to see if this map flies, and if not, why not?
No, it is worse, I'm afraid. I hope you don't mind my being
frank. In fundamental matters, you have to explain things from scratch.
Nothing can be taken for granted, and you have to put your assumptions
on the table, so that we avoid oblique comments and vocabulary
dispersion.
You say yourself that you don't know if you are talking literally or
figuratively. That says it all, I think. You should make a choice,
and work from there. Personally, I am a literalist; that is, I am
applying the scientific method. For the mind-body problem, the hard
part for a scientist consists in understanding that once we assume
the comp hyp, we can translate "philosophical problems" into
"mathematical and/or physical problems".
Philosophers don't like that (especially continental ones), but this
fits with their usual tradition of defending academic territories and
positions (food). It is natural: as in (pseudo-)religion, they are
not very happy when people use the scientific method to invade their
fields of study.
But this means that, in interdisciplinary research, you must be able
to be understood by a majority in each field you are crossing. Even
when you are successful on this, you will have to find the people
having the courage to study the connection between the domains.
A lot of scientists still believe that notions like mind and
consciousness are crackpot notions, and when sincere people try to
discuss those notions, you can be amazed by the tons of
difficulties. I have nothing against some attempts toward a
materialist solution of the MB P., and in that case at least we know
(or should know, or refute ...) that we have to abandon even
extremely weak versions of mechanism. But then, this looks like
introducing special (and unknown) infinities into the MB puzzle, so I
am not interested without some key motivation being provided.
In this list people are open minded for the "everything exists" type
of theories, like Everett Many-Worlds, with an open mind on
computationalism (Schmidhuber) and mathematicalism or immaterialism
(Tegmark). So my own contribution was well suited, given that I
propose an argument showing that if we believe that we can survive
with a digitalizable body, then we dispose, ONLY, of a very solid,
constructive, and highly structured version of an "everything": all
computations (in the precise arithmetical sense of sigma_1
arithmetical relations) and their (coded) proofs. I show also that we
dispose of a very natural notion of observers, the universal
machines, and that among them we can already "interview" those which
can prove, know, guess, and feel about their internal views on
realities.
Everett's move to embed the physicist subject *in* the object matter
of the physical equation (SWE) extends itself in the arithmetical
realm, with the embedding of the mathematician *in* arithmetic, once
we take the possibility of our local digitalization seriously enough
into consideration.
This shows mainly that, with comp, the mind-body problem is two times
more complex than what people usually think. Not only do we have to
explain qualia/consciousness from the numbers, but we have to explain
quanta/matter from the numbers too.
But universal machines have a natural theory of thought (the laws of
Boole), and a natural theory of mind (the Gödel-Löb-Solovay logics of
self-reference), and by the very existence of computer science, in
fine, you get a translation of the body problem into computer
science, which makes it automatically a problem in number theory.
Bruno
> It's not about whether other cells would sense the imposter neuron,
> it's about how much of an imposter the neuron is. If it acts like a
> real cell in every physical way, if another organism can kill it and
> eat it and metabolize it completely, then you pretty much have a
> cell. Whatever cannot be metabolized in that way is what potentially
> detracts from the ability to sustain consciousness. It's not your
> cells that need to sense DNA, it's the question of whether a brain
> composed entirely of, or significantly of, cells lacking DNA would
> be conscious in the same way as a person.
DNA doesn't play a direct role in neuronal to neuronal interaction. It
is necessary for the synthesis of proteins, so without it the neuron
would be unable to, for example, produce more surface receptors or the
essential proteins needed for cell survival; however, if the DNA were
destroyed the neuron would carry on functioning as per usual for at
least a few minutes. Now, you speculate that consciousness may somehow
reside in the components of the neuron and not just in its function,
so that perhaps if the DNA were destroyed the consciousness would be
affected - let's say for the sake of simplicity that it too would be
destroyed - even in the period the neuron was functioning normally. If
that is so, then if all the neurons in your visual cortex were
stripped of their DNA you would be blind: your visual qualia would
disappear. But if all the neurons in your visual cortex continued to
function normally, they would send the normal signals to the rest of
your brain and the rest of your brain would behave as if you could
see: that is, you would accurately describe objects put in front of
your eyes and honestly believe that you had normal vision. So how
would this state, behaving as if you had normal vision and believing
you had normal vision, differ from actually having normal vision; or
to put it differently, how do you know that you aren't blind and
merely deluded about being able to see?
--
Stathis Papaioannou
I think there could be differences in how vision is perceived if all
of the visual cortex lacked DNA, even if the neurons of the cortex
exhibited superficial evidence of normal connectivity. A person could
be dissociated from the images they see, feeling them to be
meaningless or unreal, seen as if in third person or from malicious
phantom/alien eyeballs. Maybe it would be more subtle...a sensation of
otherhanded sight, or sight seeming to originate from a place behind
the ears rather than above the nose. The non-DNA vision could be
completely inaccessible to the conscious mind, a psychosomatic/
hysterical blindness, or perhaps the qualia would be different,
unburdened by DNA, colors could seem lighter, more saturated like a
dream. The possibilities are endless. The only way to find out is to
do experiments.
DNA may not play a direct role in neuronal to neuronal interaction,
but the same could be said of perception itself. We have nothing to
show that perception is the necessary result of neuronal interaction.
The same interactions could exist in a simulation without any kind of
perceived universe being created somewhere. Just because the behavior
of neurons correlates with perception doesn't mean that their behavior
alone causes perception. Materials matter. A TV set made out of
hamburger won't work.
What I'm trying to say is that the sensorimotive experience of matter
is not limited to the physical interior of each component of a cell or
molecule, but rather it is a completely other, synergistic topology
which is as diffuse and experiential as the component side is discrete
and observable. There is a functional correlation, but that's just
where the two topologies intersect. Many minor physical changes to the
brain can occur without any noticeable differences in perception -
sometimes major changes, injuries, etc. Major changes in the psyche
can occur without any physical precipitate - reading a book may
unleash a flood of neurotransmitters but the cause is semantic, not
biochemical.
The requirement is that the artificial neurons interact with the
biological neurons in the normal way, so that the biological neurons
can't tell that they are imposters. This is a less stringent
requirement than making artificial neurons that are indistinguishable
from biological neurons under any test whatsoever. In the example I
gave before, a neuron with its DNA removed would continue behaving
normally for at least a few minutes, so the surrounding neurons would
not detect that anything had changed, whereas an electron micrograph
might easily show the difference.
-- Stathis Papaioannou
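The requirement Stathis describes, that only the outputs seen by neighbouring neurons matter, can be sketched as a toy model (Python; the threshold unit, the lookup-table replacement, and every name here are illustrative assumptions of mine, not claims about real neurons):

```python
# Two "neurons" with completely different internals but identical
# input/output behaviour: a threshold unit, and a lookup-table
# replacement precomputed from it. A downstream observer that sees
# only the outputs cannot tell which one it is connected to.
from itertools import product

class ThresholdNeuron:
    """'Biological' stand-in: fires when summed input meets a threshold."""
    def __init__(self, threshold=2):
        self.threshold = threshold
    def fire(self, inputs):
        return 1 if sum(inputs) >= self.threshold else 0

class TableNeuron:
    """'Artificial' replacement: its internals are just a precomputed table."""
    def __init__(self, reference, n_inputs=3):
        self.table = {p: reference.fire(p)
                      for p in product((0, 1), repeat=n_inputs)}
    def fire(self, inputs):
        return self.table[tuple(inputs)]

real = ThresholdNeuron()
fake = TableNeuron(real)
patterns = list(product((0, 1), repeat=3))
# Identical outputs on every input pattern => indistinguishable downstream.
print(all(real.fire(p) == fake.fire(p) for p in patterns))  # -> True
```

The point of the sketch is only that "same behaviour at the boundary" is a well-defined, checkable property even when the internals differ entirely.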
No, but it means the chicken head isn't necessary for walking - just like
DNA isn't necessary for consciousness.
Brent
Interaction with the world. Information processing. Memory. A point
of view; i.e. model of the world including self. Purpose/values.
Brent
I think you have failed to address the point made by several people so
far, which is that if the replacement neurons can interact with the
remaining biological neurons in a normal way, then it is not possible
for there to be a change in consciousness. The important thing is
**behaviour of the replacement neurons from the point of view of the
biological neurons**.
--
Stathis Papaioannou
Better than magic topology.
Brent
Neurons don't see anything. They don't have a "point of view".
> You
> can't presume to know that behavior is independent of context.
If behavior is independent of context it isn't even intelligent, much
less conscious.
> If you
> consider the opposite scenario, at what point do you consider a
> microelectronic configuration conscious? How many biological neurons
> does it take, added to a computer, before it has its own agenda?
>
That's like asking how many NP junctions have to be added to make a
computer. It's a matter of organization, not just numbers.
Brent
I think you're still missing the point. Forget about consciousness for
the moment and consider only the mechanical aspect of the brain. By
analogy consider a car: we replace parts that wear out with new parts
that function equivalently. If we replace the sparkplugs, then as long
as the new ones screw in properly and have the right electrical
properties, it doesn't matter if they are a different shape or colour.
The proof of this is that the car is observed to function normally under
all circumstances. Similarly with the brain, we replace some existing
neurons with modified or artificial neurons that function identically.
No doubt it would be difficult to make such neurons, but *provided*
they can be made and appropriately installed, the behaviour of the
entire brain will be the same, and *therefore* the consciousness will
be the same. Do you agree with this, or not?
--
Stathis Papaioannou
And interfacing biological neurons with non-biological circuits is not
sci-fi nowadays.
http://www.youtube.com/watch?v=1-0eZytv6Qk&feature=related
http://www.youtube.com/watch?v=1QPiF4-iu6g&feature=fvwrel
http://www.youtube.com/watch?v=-EvOlJp5KIY
This is NOT a proof, nor even strong evidence for computationalism,
but it is strong evidence that humans will believe in comp, and
practice it, no matter what.
Bruno
> On 7/20/2011 2:59 PM, Craig Weinberg wrote:
>> What does consciousness require?
>>
>
> Interaction with the world.
But what is a world? Also, assuming computationalism, you need only to
believe that you interact with a "world/reality", whatever that is,
like in a dream. If not, you *do* introduce some magic into both
consciousness and world.
> Information processing. Memory. A point of view; i.e. model of the
> world including self. Purpose/values.
OK. Although "conscious purpose" is already a high form of
consciousness, it might be self-consciousness.
Bruno
> I don't have a problem with living neurological systems extending
> their functionality with mechanical prosthetics, it's the other way
> around that is more of an issue. People driving cars doesn't mean cars
> driving human minds.
Sure, but we do both: robots with neurons, and animals, including
humans, with the brain partially replaced by artificial neurons.
Anyway, if you think molecules are needed, that is, that the level of
substitution includes molecular activity, this too can be emulated by
a computer. The only way to negate computationalism consists in
pretending there is some NON Turing-emulable activity going on in the
brain, and relevant for consciousness. In that case, there is no
possible level of digital substitution.
Note that all physical phenomena known today are Turing emulable,
even, in some sense, quantum indeterminacy (in the QM without
collapse) where the indeterminacy is a first person view of a
digitalisable self-multiplication experiment.
All that consciousness (and matter) needs is a sufficiently rich
collection of self-referential relations. It happens that the numbers,
by the simple laws of addition and multiplication, already provide
just that. Adding some ontological elements can only make the mind
body problem more complex to even just formulate.
Bruno
>>
>>> On 7/20/2011 2:59 PM, Craig Weinberg wrote:
>>>> What does consciousness require?
>>
>>> Interaction with the world.
>>
>> But what is a world? Also, assuming computationalism, you need only
>> to
>> believe that you interact with a "world/reality", whatever that is,
>> like in dream. If not you *do* introduce some magic in both
>> consciousness and world.
>>
>>> Information processing. Memory. A point of view; i.e. model of the
>>> world including self. Purpose/values.
>>
>> OK. Although "conscious purpose" is already a high form of
>> consciousness, it might be self-consciousness.
>
> Consciousness is nothing more than the elaborated experience of
> feeling.
OK.
> The world it interacts with does not have to make any
> objective sense, requires no information processing or memory, purpose
> or value.
OK.
> Pain is consciousness.
OK.
> It need not contain any information
> beyond a projection of the possibility of its cessation.
OK.
> It is a self-
> explanatory, innate, first person experience
OK.
> that doesn't need any
> complex logic behind it,
Why? This is just like saying "we can't explain it". I am OK with
that, but then I look for better definitions and assumptions, with the
goal of at least finding an explanation of why it seems like that, or
why there is no explanation. Without this, it is like invoking the
will of God, and adding "don't search for an explanation".
> nor will any amount of logic necessarily
> alleviate it directly.
I agree. Most people will say that logic will just add a layer of
headache :)
Still, we need logic, and *some* theory to explain why we cannot
explain directly the first person sensations.
> You can't always reason with pain.
Right. It is not a reason type of thing. But there might be a (meta)
reason for that.
> Pain cannot
> be simulated quantitatively in any way.
How do you know?
> There is no equation, game, or
> purely inorganic quantitative system that has ever felt pain or will
> ever feel what we know as pain.
You remind me of the Spanish Christians arguing that South American
Indians have no souls. You can rape and enslave them at will: it is
not a sin! (To be sure, they *did* eventually conclude, at the
Valladolid meeting, that they have souls, so that it was necessary to
convert them to save them from hell.)
(That's why the "spirit" of the Salvia divinorum plant became known as
the Virgin Mary!)
> Without first hand experience of the
> difference between pain and pleasure, there can be no animal level of
> consciousness.
I am OK with this. But I do think plausible that you can emulate
digitally first hand experiences of pain and pleasure. Then 'real'
human-like pain, which can last for a time, will need the whole
(arithmetical) truth to be stable on its many 'futures'. Our first
person experiences are non computably distributed on an infinite
structure, but that is a consequence of its digitalness at some level.
Bruno
>
> On Jul 21, 5:55 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 21 Jul 2011, at 00:14, meekerdb wrote:
>>
>>> On 7/20/2011 2:59 PM, Craig Weinberg wrote:
>>>> What does consciousness require?
>>
>>> Interaction with the world.
>>
>> But what is a world? Also, assuming computationalism, you need only
>> to
>> believe that you interact with a "world/reality", whatever that is,
>> like in dream. If not you *do* introduce some magic in both
>> consciousness and world.
>>
>>> Information processing. Memory. A point of view; i.e. model of the
>>> world including self. Purpose/values.
>>
>> OK. Although "conscious purpose" is already a high form of
>> consciousness, it might be self-consciousness.
>>
>> Bruno
>>
>> http://iridia.ulb.ac.be/~marchal/
>
So I need to believe some magic or I have to introduce some magic. That
seems a distinction without a difference.
>
>
>
>> Information processing. Memory. A point of view; i.e. model of the
>> world including self. Purpose/values.
>
> OK. Although "conscious purpose" is already a high form of
> consciousness, it might be self-consciousness.
>
> Bruno
I think there are different kinds and levels of consciousness,
awareness, cogitation. Purpose need not be something reflected on.
Even simple animals have purpose hardwired in their genes.
Brent
There was a lot made of the perceived difference in digital music when
CDs first came out, in the audiophile communities particularly. I do
think that a subtle difference can be detected, but it's hard to know
whether it's the digital nature itself or the processing, mixing,
playback equipment, confirmation bias, etc. Digital music seems
harsher, more sibilant and shallow on the percussion. It doesn't
bother me much, but I think there could be a legitimate, if subtle
difference stemming from the pure conversion of analog waveforms to
digital samples.
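The conversion of analog waveforms to digital samples mentioned above can be made concrete with a small sketch (Python, standard library only; the sample rate, bit depth, and names are illustrative choices of mine, deliberately far below CD quality so the error is visible):

```python
import math

# Sample a 1 kHz sine wave at 8 kHz and quantize each sample to
# 4-bit resolution (16 levels), then measure the worst-case
# quantization error introduced by the rounding.

def quantize(x, bits):
    """Round x (in [-1, 1]) to the nearest of 2**bits evenly spaced levels."""
    step = 2.0 / (2 ** bits - 1)
    return round(x / step) * step

sample_rate = 8000          # samples per second
freq = 1000                 # sine frequency in Hz
samples = [math.sin(2 * math.pi * freq * n / sample_rate)
           for n in range(sample_rate // freq)]   # one full cycle
digital = [quantize(s, bits=4) for s in samples]

# The error is bounded by (about) half a quantization step.
max_error = max(abs(s - d) for s, d in zip(samples, digital))
print(round(max_error, 4))
```

Each added bit halves the step size, so at 16 bits the worst-case error is roughly 2^12 times smaller than at 4 bits, which is one reason any audible difference from the conversion itself would be subtle.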
I agree there would be a level at which digital recording is
indistinguishable from analog recording, but I think that it's due to
the intentional gating of the sense through the psyche and media path
rather than the limitations of nerve cells firing. The nerve cells
themselves may experience a huge range of sensitivity which we have no
conscious access to - the cochlea, maybe even more. Talking about raw
sensation here, not depth/richness of interpretative qualia.
Isn't that the definition of a physical signal?
> There is no physical signal there, it's
> just an event being shared sequentially amongst materials.
>
> If you look at it from the inside out instead, the psyche is picking
> up the analog modulation of the cilia, cochlea as a whole, and to some
> extent the gestalt sense of the entire aural, physical event external
> to the ear through the sensitivity of the auditory nerves. The entire
> media path is collapsed, or as I say, cumulatively entangled, so that
> the psyche is itself semantically altered to conform to the sense of
> the sound event while preserving subtle traces of the entire
> interstitial media path. This experiential description is every bit as
> 'real' as the outside in,
No, it's not. It implies, for example, that replacing a dysfunctional
cochlea with an electronic device that stimulates the auditory nerve
would not produce hearing - but it does. The entire media path (in which for
some reason you left out the sound waves and their source) consists of
separable physical components.
> and for most purposes much more relevant as
> it is the signifying content of the sound that we care about, rather
> than the a-signifying, generic form of it's transfer.
>
> I agree there would be a level at which digital recording is
> indistinguishable from analog recording, but I think that it's due to
> the intentional gating of the sense through the psyche and media path
> rather than the limitations of nerve cells firing. The nerve cells
> themselves may experience a huge range of sensitivity which we have no
> conscious access to - the cochlea, maybe even more. Talking about raw
> sensation here, not depth/richness of interpretative qualia.
>
What sense does it make to talk about sensations of our nerve cells
to which we have no access? Who does have access to them? If no one
does then in what sense are they "sensations"? Of course you may
speculate that each nerve cell itself experiences some sensation, and
each molecule in the nerve cell, and each quark in each atom, and the
atoms of the atmosphere that carry the sound wave - but you could also
speculate that pigs will fly. The question is, "What's the evidence?"
Brent
No doubt it would be technically difficult to make an artificial
replacement for a neuron in a different substrate, but there is no
theoretical reason why it could not be done, since there is no
evidence for any magical processes inside neurons. The argument is
that IF an artificial neuron could be made which would replicate the
behaviour of a biological neuron well enough to slot into position in
a brain unnoticed THEN the consciousness of that brain would be
unaffected. If not, a bizarre situation would arise where
consciousness could change or disappear (eg., going blind) without the
subject noticing. Can you address this particular point?
--
Stathis Papaioannou
>> if you think molecules are needed, that is, that the level of
>> substitution includes molecular activity, this too can be emulated by
>> a computer
>
> But it can only be emulated in a virtual environment interfacing with
> a computer literate human being though.
Why. That's begging the question.
> A real mouse will not be able
> to live on virtual cheese.
But a virtual mouse will (I will talk *in* the comp theory).
> Why can't consciousness be considered
> exactly the same way, as an irreducible correlate of specific meta-
> meta-meta-elaborations of matter?
What do you mean by matter? Primitive matter does not exist. A TOE has
to explain where the belief in matter comes from without assuming it.
>
>> All what consciousness (and matter) needs is a sufficiently rich
>> collection of self-referential relations. It happens that the
>> numbers,
>> by the simple laws of addition and multiplication provides already
>> just that. Adding some ontological elements can only make the mind
>> body problem more complex to even just formulate.
>
> Information is not consciousness. Energy is the experience of being
> informed and informing, but it is not information.
I agree.
> This is why a brain
> must be alive and conscious (not in a coma) to be informed or inform,
> and why a computer must be turned on to execute programs, or a
> mechanical computing system has to have kinetic initialization, etc.
Not at all. All you need are relative genuine relations. That does
explain both the origin of quanta and qualia, including the difference
of the quantitative and the qualitative.
> The path that energy takes determines the content of the experience to
> some extent, but it is the physical nature of the materials through
> which the continuous sense of interaction occurs which determine the
> quality or magnitude of possible qualitative elaboration (physical,
> chemo, bio, zoo-physio, neuro, cerebral) of that experience.
How?
> Physical
> will take you to detection, chemo to sense, bio to feeling, zoo to
> emotion, neuro to cognition, cerebral to full abstraction (colloquial
> terms here, not asserting a formal taxonomy).
You say so, but my point is that if you assume matter, your theory
needs very special form of infinities. Which one?
> All are forms of
> awareness. Consciousness implies awareness of awareness
That is self-consciousness.
> which maybe
> comes at the neuro or cerebral level, maybe lower? It has nothing to
> do with the complexity of the path that the energy takes. Complexity
> is an experience, not a discrete ontological condition.
You need infinities to make complexity an experience, and that is like
putting the cart before the horse.
>
>> Adding some ontological elements can only make the mind
>> body problem more complex to even just formulate.
>
> This makes me think that you are sentimental about protecting the
> simplicity of an abstract formula, rather than faithfully representing
> the problem.
I was mentioning the mind-body problem. No formula was involved. You
put infinities and uncomputability everywhere, where comp put it in
very special place with complete justification.
> I'm not especially interested in the 'easy' problem of
> consciousness.
Me neither.
> It's a worthwhile problem, to be sure, it's just not my
> thing. I do think, however, that if we can accurately describe the
> pattern of what the hard problem seems to arise from, it may have
> implications for both the easy and hard problems. At worst, my view
> limits the aspirations of inorganic materials to simulate
> consciousness,
That is vitalism. It fails to explain anything. It makes the problem
less tractable. It is similar to the God of the gaps. Comp explains why
there is a gap. I am not sure you have studied the theory.
> but I don't see that as anything more than an
> identification of how the cosmos works. We don't want to create
> consciousness, we can do that already by reproducing. We want an
> omnipotent glove for the hand of consciousness that we already have.
> That seems easier to accomplish if we are not convincing ourselves
> that feelings must be numbers.
Comp explains completely why feelings are NOT numbers. You don't study
the theory, and you criticize only your own prejudice about numbers
and machines.
You can use non-comp, as you seem to desire, but then tell us what is
not Turing emulable in "organic matter"?
Bruno
> On 7/21/2011 2:55 AM, Bruno Marchal wrote:
>>
>> On 21 Jul 2011, at 00:14, meekerdb wrote:
>>
>>> On 7/20/2011 2:59 PM, Craig Weinberg wrote:
>>>> What does consciousness require?
>>>>
>>>
>>> Interaction with the world.
>>
>> But what is a world? Also, assuming computationalism, you need only
>> to believe that you interact with a "world/reality", whatever that
>> is, like in dream. If not you *do* introduce some magic in both
>> consciousness and world.
>
>
> So I need to believe some magic or I have to introduce some magic.
> That seems a distinction without a difference.
With comp the only magic is 0, 1, 2, 3, + addition + multiplication.
With non-comp the magic is primitive matter + primitive physical laws
+ primitive consciousness + non intelligible links between all those
things.
>
>>
>>
>>
>>> Information processing. Memory. A point of view; i.e. model of
>>> the world including self. Purpose/values.
>>
>> OK. Although "conscious purpose" is already a high form of
>> consciousness, it might be self-consciousness.
>>
>> Bruno
>
> I think there are different kinds and levels of consciousness,
> awareness, cogitation. Purpose need not be something reflected on.
> Even simple animals have purpose hardwired in their genes.
OK. I was talking about the conscious purpose, not about God or Matter
"purpose".
Bruno
On 7/22/2011 4:58 AM, Bruno Marchal wrote:
>
> On 21 Jul 2011, at 16:08, Craig Weinberg wrote:
>
>>> if you think molecules are needed, that is, that the level of
>>> substitution includes molecular activity, this too can be emulated by
>>> a computer
>>
>> But it can only be emulated in a virtual environment interfacing with
>> a computer literate human being though.
>
> Why. That's begging the question.
>
>
Bruno has a strong point here. So long as one is dealing with a system
that can be described such that the description can be turned into a
recipe to represent all aspects of the system, then it is, by
definition, computable!
>
>> A real mouse will not be able
>> to live on virtual cheese.
>
> But a virtual mouse will (I will talk *in* the comp theory).
Virtual mice eat virtual cheese and get virtual calories from it! Be
careful that you're not forcing a multi-leveled concept into a single
conceptual level.
>
>
>
>> Why can't consciousness be considered
>> exactly the same way, as an irreducible correlate of specific meta-
>> meta-meta-elaborations of matter?
>
> What do you mean by matter? Primitive matter does not exist. A TOE has
> to explain where the belief in matter comes from without assuming it.
>
>
OK, Bruno, would you stop saying that unless you explicitly explain what
you mean by "primitive matter"? The point that "A TOE has to explain
where the belief in matter comes from without assuming it" is very
important, but you might agree that that kind of multi-leveled
TOE is foreign to most people. Not many people consider that a Theory of
Everything must contain not only a representation of what is observed
but also the means and methods of the observations thereof, or else it
is not a theory of *Everything*. This actually makes the concept of a
TOE subject to Incompleteness considerations!
>
>>
>>> All what consciousness (and matter) needs is a sufficiently rich
>>> collection of self-referential relations. It happens that the numbers,
>>> by the simple laws of addition and multiplication provides already
>>> just that. Adding some ontological elements can only make the mind
>>> body problem more complex to even just formulate.
>>
>> Information is not consciousness. Energy is the experience of being
>> informed and informing, but it is not information.
>
> I agree.
>
>
Indeed!
>
>
>> This is why a brain
>> must be alive and conscious (not in a coma) to be informed or inform,
>> and why a computer must be turned on to execute programs, or a
>> mechanical computing system has to have kinetic initialization, etc.
>
> Not at all. All you need are relative genuine relations. That does
> explain both the origin of quanta and qualia, including the difference
> of the quantitative and the qualitative.
>
But Bruno, you are side-stepping the vital question of persistence and
transitivity in that notion of "genuine relations." One's TOE has to
account for the appearance of time, even if it is not primitive. It is
not enough to show that matter is not primitive; we have to show how the
image of an evolving matter universe is possible. So far we are
implying it via diamonds, but diamonds do not map in ways that are
necessary to code interactions.
>
>> The path that energy takes determines the content of the experience to
>> some extent, but it is the physical nature of the materials through
>> which the continuous sense of interaction occurs which determine the
>> quality or magnitude of possible qualitative elaboration (physical,
>> chemo, bio, zoo-physio, neuro, cerebral) of that experience.
>
>
> How?
>
>
Umm, Craig, no. Energy is defined by the path of events of the
interaction. This is why the word "action" is used. We have a notion of
least action which relates to the minimum configuration of a system;
the content of the experience *is* the "inside view" of the process
that always strives for that minimum.
>
>
>> Physical
>> will take you to detection, chemo to sense, bio to feeling, zoo to
>> emotion, neuro to cognition, cerebral to full abstraction (colloquial
>> terms here, not asserting a formal taxonomy).
>
> You say so, but my point is that if you assume matter, your theory
> needs very special form of infinities. Which one?
>
>
Could you explain this necessity, Bruno?
>
>
>> All are forms of
>> awareness. Consciousness implies awareness of awareness
>
> That is self-consciousness.
Consciousness does not require a model of self that is integrated into
the content of consciousness, therefore consciousness is not reflexive
in the primitive sense.
>
>
>
>> which maybe
>> comes at the neuro or cerebral level, maybe lower? It has nothing to
>> do with the complexity of the path that the energy takes. Complexity
>> is an experience, not a discrete ontological condition.
>
> You need infinities to make complexity an experience, and that is like
> putting the horse behind the car.
>
>
Please explain this.
>
>>
>>> Adding some ontological elements can only make the mind
>>> body problem more complex to even just formulate.
>>
>> This makes me think that you are sentimental about protecting the
>> simplicity of an abstract formula, rather than faithfully representing
>> the problem.
>
> I was mentioning the mind-body problem. No formula was involved. You
> put infinities and uncomputability everywhere, where comp put it in
> very special place with complete justification.
>
>
>
>> I'm not especially interested in the 'easy' problem of
>> consciousness.
>
> Me neither.
>
>
>
>> It's a worthwhile problem, to be sure, it's just not my
>> thing. I do think, however, that if we can accurately describe the
>> pattern of what the hard problem seems to arise from, it may have
>> implications for both the easy and hard problems. At worst, my view
>> limits the aspirations of inorganic materials to simulate
>> consciousness,
>
> That is vitalism. It fails to explain anything. It makes the problem
> less tractable. It is similar to the God of the gap. Comp explains why
> there is a gap. I am not sure you study the theory.
>
>
OTOH, Bruno, one cannot gloss over the way that quantum logic is
non-distributive. Reducing all to combinators or numbers that do not
involve this seems doomed from the start. It is as if we dissolve
everything into a soup and say: See, Existence is soup!
>
>> but I don't see that as anything more than an
>> identification of how the cosmos works. We don't want to create
>> consciousness, we can do that already by reproducing. We want an
>> omnipotent glove for the hand of consciousness that we already have.
>> That seems easier to accomplish if we are not convincing ourselves
>> that feelings must be numbers.
>
> Comp explains completely why feelings are NOT numbers. You don't study
> the theory, and you criticize only your own prejudice about numbers
> and machines.
>
> You can use non-comp, as you seem to desire, but then tell us what is
> not Turing emulable in "organic matter"?
>
> Bruno
>
Craig, Bruno has a point there. Be sure that you are not arguing against
a straw man unintentionally!
Onward!
Stephen
Unless you believe in zombies, the point is that there *is* enough
phenomenological qualia and subjectivity, and contingencies, in the
realm of numbers. The different 1-views (the phenomenology of mind, of
matter, etc.) are given by the modal variants of self-reference. This
has been done and this does explain the shape of modern physics (where
physicists are lost in a labyrinth of incompatible interpretations).
Most of the quantum weirdnesses are theorems in arithmetic.
>
> 3) the qualitative principle is identical to privacy in an ontological
> sense of being self-sequestering from public exterior access. The
> privacy itself is what defines the locus of qualitative phenomena.
OK.
>
> 4) this 'stuff' may be ultimately originating through non-local, a-
> temporal axiom of the Singularity,
?
> so that we may not only have
> restricted access by virtue of our own separation from each other, but
> qualia itself may somehow present the experience of entities which we
> would consider to be in the future as well as the past.
Nonsense with comp. We just cannot *assume* things like past and
future.
That is their error. You don't need to copy them.
> I'm only taking a hard line on this because I think that it's in such
> contradistinction to the momentum of civilized thought. A sufficiently
> evolved card game could be pretty damn impressive, and if we invest
> our own feeling into it, there are arguably new feelings that we
> experience as a result, I just don't think that what we see as the
> game can have feelings that we can realize. We can't rule out though
> that anything we experience as having no feeling has a private
> dimension that may see us as having no feeling. It just gets a bit too
> psychedelic (salviadelic?) to actually implement that level of animism
> practically, don't you think?
Only persons can think.
>
>> I am OK with this. But I do think plausible that you can emulate
>> digitally first hand experiences of pain and pleasure. Then 'real'
>> human-like pain, which can last for a time, will need the whole
>> (arithmetical) truth to be stable on its many 'futures'.
>
> I think you can emulate first hand experiences only in a system capable
> of subjectively experiencing them.
That is tautological. I agree of course. But the question is about the
nature of that system. You seem to want it described by physics. This
is logically OK, but you have to abandon comp. That's all.
> We certainly should be able emulate
> the output of some kind of pain or pleasure and input it into another
> nervous system. Simple record and playback through an analog or
> digital medium. That's really one of my earliest and strongest dreams
> would be to be involved in the orchestration of full sensory
> experiences, brain-direct.
>
>> Our first
>> person experiences are non computably distributed on an infinite
>> structure, but that is a consequence of its digitalness at some
>> level.
>
> There was a lot made of the perceived difference in digital music when
> CDs first came out, in the audiophile communities particularly. I do
> think that a subtle difference can be detected but hard to know
> whether it's the digital nature itself or the processing, mixing,
> playback equipment, confirmation bias, etc. Digital music seems
> harsher, more sibilant and shallow on the percussion. It doesn't
> bother me much, but I think there could be a legitimate, if subtle
> difference stemming from the pure conversion of analog waveforms to
> digital samples.
I am not convinced by arguments of impossibility that point at current
technology.
Bruno
> Hi Bruno and Craig,
>
> On 7/22/2011 4:58 AM, Bruno Marchal wrote:
>>
>> On 21 Jul 2011, at 16:08, Craig Weinberg wrote:
>>
>>>> if you think molecules are needed, that is, that the level of
>>>> substitution includes molecular activity, this too can be
>>>> emulated by
>>>> a computer
>>>
>>> But it can only be emulated in a virtual environment interfacing
>>> with
>>> a computer literate human being though.
>>
>> Why. That's begging the question.
>>
>>
>
> Bruno has a strong point here. So long as one is dealing with a
> system that can be described such that that description can be
> turned into a recipe to represent all aspects of the system, then it
> is, by definition computable!
>>
>>> A real mouse will not be able
>>> to live on virtual cheese.
>>
>> But a virtual mouse will (I will talk *in* the comp theory).
>
> Virtual mice eat virtual cheese and get virtual calories from it!
And you can prove that virtual mice exists in arithmetic.
> Be careful that your not forcing a multi-leveled concept into a
> single conceptual level.
?
>>
>>
>>
>>> Why can't consciousness be considered
>>> exactly the same way, as an irreducible correlate of specific meta-
>>> meta-meta-elaborations of matter?
>>
>> What do you mean by matter? Primitive matter does not exist. A TOE
>> has to explain where the belief in matter comes from without
>> assuming it.
>>
>>
> OK, Bruno, would you stop saying that unless you explicitly explain
> what you mean by "primitive matter"?
The object of the ontological commitment of the materialist, naturalist,
or physicalist.
It is not assumed in comp, but its appearance is explained by the
competition among infinitely many universal numbers "acting" below the
substitution level (that is already a consequence of just UDA1-7).
> The point that "A TOE has to explain where the belief in matter
> comes from without assuming it" is very important, though, but you
> might agree that that kind of multi-leveled TOE is foreign to most
> people. Not many people consider that a Theory of Everything must
> contain not only a representation of waht is observed but also the
> means and methods of the observations there of, or else it is not a
> theory of *Everything*.
OK.
> This actually makes the concept of a TOE subject to Incompleteness
> considerations!
Assuming comp, OK.
>
>>
>>>
>>>> All what consciousness (and matter) needs is a sufficiently rich
>>>> collection of self-referential relations. It happens that the
>>>> numbers,
>>>> by the simple laws of addition and multiplication provides already
>>>> just that. Adding some ontological elements can only make the mind
>>>> body problem more complex to even just formulate.
>>>
>>> Information is not consciousness. Energy is the experience of being
>>> informed and informing, but it is not information.
>>
>> I agree.
>>
>>
> Indeed!
>
>>
>>
>>> This is why a brain
>>> must be alive and conscious (not in a coma) to be informed or
>>> inform,
>>> and why a computer must be turned on to execute programs, or a
>>> mechanical computing system has to have kinetic initialization, etc.
>>
>> Not at all. All you need are relative genuine relations. That does
>> explain both the origin of quanta and qualia, including the
>> difference of the quantitative and the qualitative.
>>
>
> But Bruno, you are side-stepping the vital question of persistance
> and transitivity in that notion of "genuine relations." One's TOE
> has to account for the appearance of time, even it it is not
> primitive.
That has been done for subjective time. It is a construct in the S4Grz1
modality, or the X1* modality. Is there a physical time? That is a
comp open problem (as it is with most physicalist theories too).
> It is not enough to show that matter is not primitive, we have to
> show how the image of an evolving matter universe is possible.
The possibility is provided by the internal arithmetical hypostases.
> So far we are implying it via diamonds, but diamonds do not map in
> ways that are necessary to code interactions.
Not yet. If you can prove it cannot, then comp is refuted.
>
>>
>>> The path that energy takes determines the content of the
>>> experience to
>>> some extent, but it is the physical nature of the materials through
>>> which the continuous sense of interaction occurs which determine the
>>> quality or magnitude of possible qualitative elaboration (physical,
>>> chemo, bio, zoo-physio, neuro, cerebral) of that experience.
>>
>>
>> How?
>>
>>
> Umm, Craig, no. Energy is defined by the path of events of the
> interaction. This is why the word "action" is used. We have a notion
> of least action which relates to the minimum configuration of a
> system, the content of the experience *is* the "inside view" of the
> process that strives always for that minimum.
Careful. You reintroduce some physics here.
>
>>
>>
>>> Physical
>>> will take you to detection, chemo to sense, bio to feeling, zoo to
>>> emotion, neuro to cognition, cerebral to full abstraction
>>> (colloquial
>>> terms here, not asserting a formal taxonomy).
>>
>> You say so, but my point is that if you assume matter, your theory
>> needs very special form of infinities. Which one?
>>
>>
> Could explain this necessity, Bruno?
I recall that by the UD argument comp implies that matter does not
exist. So here I was giving the contrapositive. You can reintroduce
matter by negating comp. But such matter requires you, and your body,
to be non Turing emulable (if not, comp is again assumed). That is why
a non comp theory of matter has to introduce special non Turing
emulable infinities (not the first person infinities that we can
already justify by comp: they are also non Turing emulable, a priori).
>
>>
>>
>>> All are forms of
>>> awareness. Consciousness implies awareness of awareness
>>
>> That is self-consciousness.
>
> Consciousness does not require a model of self that is integrated
> into the content of consciousness, therefore consciousness is not
> reflexive in the primitive sense.
OK.
>
>>
>>
>>
>>> which maybe
>>> comes at the neuro or cerebral level, maybe lower? It has nothing
>>> to
>>> do with the complexity of the path that the energy takes. Complexity
>>> is an experience, not a discrete ontological condition.
>>
>> You need infinities to make complexity an experience, and that is
>> like putting the horse behind the car.
>>
>>
> Please explain this.
It is an allusion to the same infinities as above. You need them to
have a notion of experience in any non-comp context, a fortiori for
the experience of complexity.
>
>>
>>>
>>>> Adding some ontological elements can only make the mind
>>>> body problem more complex to even just formulate.
>>>
>>> This makes me think that you are sentimental about protecting the
>>> simplicity of an abstract formula, rather than faithfully
>>> representing
>>> the problem.
>>
>> I was mentioning the mind-body problem. No formula was involved.
>> You put infinities and uncomputability everywhere, where comp put
>> it in very special place with complete justification.
>>
>>
>>
>>> I'm not especially interested in the 'easy' problem of
>>> consciousness.
>>
>> Me neither.
>>
>>
>>
>>> It's a worthwhile problem, to be sure, it's just not my
>>> thing. I do think, however, that if we can accurately describe the
>>> pattern of what the hard problem seems to arise from, it may have
>>> implications for both the easy and hard problems. At worst, my view
>>> limits the aspirations of inorganic materials to simulate
>>> consciousness,
>>
>> That is vitalism. It fails to explain anything. It makes the
>> problem less tractable. It is similar to the God of the gap. Comp
>> explains why there is a gap. I am not sure you study the theory.
>>
>>
> OTOH, Bruno. one cannot gloss over the way that quantum logic is non-
> distributive. Reducing all to combinators or numbers that do not
> involve this seems doomed from the start.
On the contrary, we have to explain the non distributivity from the
physics (observable) extracted from comp. But this has been done. The
quantum logics extracted from comp are non distributive (very plausibly).
> it is as if we dissolve everything into a soup and say: See,
> Existence is soup!
? (lol)
Bruno
>>> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
>>> .
>>>
>>
>> http://iridia.ulb.ac.be/~marchal/
>>
>>
>>
>
> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
I think your theory is incoherent. If the neurons can "talk to each
other" thru the "pegs" then all the neurons except the afferent neurons
of perception and the efferent neurons of action could be replaced and
the person would *behave* exactly the same, including reporting that
they felt the same. They would be a philosophical zombie. They would
not *exhibit* dementia, catatonia, or any other symptom.
Brent
But that's contradicting your assumption that the "pegs" are transparent
to the neural communication:
"If the living
cells are able to talk to each other well through the prosthetic
network, then functionality should be retained"
Whatever neurons remain, even if it's only the afferent/efferent
ones, they get exactly the same communication as if there were no "pegs"
and the whole brain was neurons.
> If it were possible to spare certain areas or categories of
> neurons then I would expect more of a fragmented subject whose means
> of expression are intact, but who may not know what they are about to
> express. A partial zombie, being fed meaningless instructions but
> carrying them out consciously, if involuntarily. Of course, there may
> be all kinds of semantic dependencies which would render someone
> comatose before it ever got that far. If I remove all vowels from my
> writing there is a certain effect. If I remove all of the verbs there
> is another, if I switch to 50% Chinese it's different from going 50%
> binary, etc. You would have to experiment to find out but I think the
> success would hinge as much on retaining organic composition as
> reproducing logical characteristics.
>
You're evading the point by changing examples.
It does raise in my mind an interesting point though. These questions
are usually considered in terms of replacing some part of the brain (a
neuron, or a set of neurons) by an artificial device that implements the
same input/output function. It then seems, absent some intellect
vitale, that the behavior of that brain/person would be unchanged. But
wouldn't it be likely that the person would suffer some slight
impairment in learning/memory simply because the artificial device
always computes the same function, whereas the biological neurons grow
and change in response to stimuli. And those stimuli are external and
cannot be foreseen by the doctor. So what he needs to implant is not
just a fixed function but a function that depends on the history of its
inputs (i.e. a function with memory).
Brent
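Brent's distinction above, between a fixed input/output function and a function that depends on the history of its inputs, can be sketched as a toy model (the class names and the Hebbian-style update rule are purely illustrative assumptions, not a claim about real neurons):

```python
class FixedNeuron:
    """Stateless replacement: always computes the same input/output map."""
    def __init__(self, weight):
        self.weight = weight

    def fire(self, x):
        return max(0.0, self.weight * x)  # simple rectified response

class PlasticNeuron(FixedNeuron):
    """Stateful replacement: the weight drifts with the history of inputs,
    a crude stand-in for activity-dependent synaptic plasticity."""
    def __init__(self, weight, rate=0.1):
        super().__init__(weight)
        self.rate = rate

    def fire(self, x):
        out = max(0.0, self.weight * x)
        self.weight += self.rate * x * out  # Hebbian-style update
        return out

fixed, plastic = FixedNeuron(0.5), PlasticNeuron(0.5)
stimuli = [1.0, 1.0, 1.0]
print([fixed.fire(s) for s in stimuli])              # [0.5, 0.5, 0.5]
print([round(plastic.fire(s), 3) for s in stimuli])  # [0.5, 0.55, 0.605]
```

Identical stimuli produce identical outputs from the fixed device but a growing response from the history-dependent one, which is exactly the divergence Brent predicts an implanted fixed-function device would show over time.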
On 7/22/2011 2:11 AM, Bruno Marchal wrote:
> But what is a world? Also, assuming computationalism, you need only to believe that you interact with a "world/reality", whatever that is, like in dream. If not you *do* introduce some magic in both consciousness and world.
So I need to believe some magic or I have to introduce some magic. That seems a distinction without a difference.
With comp the only magic is 0, 1, 2, 3, + addition + multiplication.
But is that the *only* magic? It seems to me that your argument includes the magic of the UD. If I understand it, it says that if a UD is running it executes all possible programs. Among those programs are ones that are simulations of Everett's multiverse, such as we may inhabit, including the simulations of ourselves. Consciousness is some part of the information processing in those simulations of us, where the same conscious state is realized in many different programs and so has many different continuations and predecessors.
But all this is a hypothetical depending on a UD.
And aside from the problem that prima facie it will produce more chaotic non-lawlike experiences than law-like ones, there is no reason to suppose a UD exists. This explanation of the world is very much like Boltzmann's brain. It generates "everything" and then tries to pick out "this".
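The UD Brent describes, which runs all possible programs by interleaving their execution, can be sketched as a scheduling skeleton (illustrative only: it dispenses with actual program execution and just shows how dovetailing gives every program index unboundedly many steps without any program blocking the rest):

```python
from itertools import count, islice

def dovetail():
    """Yield (program, step) pairs so that every program index eventually
    receives every number of execution steps."""
    for n in count(1):
        # at stage n, advance programs 0..n-1 by one more step each
        for prog in range(n):
            yield (prog, n - prog)

first = list(islice(dovetail(), 10))
print(first)
# [(0, 1), (0, 2), (1, 1), (0, 3), (1, 2), (2, 1), (0, 4), (1, 3), (2, 2), (3, 1)]
```

Program 0 never monopolizes the machine, yet no program is ever starved, which is the point of the dovetailing construction: non-halting programs cannot prevent the rest of the enumeration from running.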
Comp embraces the non computable. If you study the work you will
understand that both matter and mind arise from the non computable,
with comp.
>
>>> Comp explains completely why feelings are NOT numbers. You don't
>>> study
>>> the theory, and you criticize only your own prejudice about numbers
>>> and machines.
>>
>>> You can use non-comp, as you seem to desire, but then tell us what
>>> is
>>> not Turing emulable in "organic matter"?
>>
>>> Bruno
>>
>> Craig, Bruno has a point there. Be sure that you are not arguing
>> against
>> a straw man unintesionally!
>
> Yeah, I would need to know how comp explains feelings exactly.
See the second part of sane04. Ask questions if there are problems.
> I'm
> just going by my observation that numbers are in many ways everything
> that feeling is not. To get to the feeling of numbers, you have to
> look at something like numerology.
I doubt that very much. Lol.
All you need is computer science. Actually all you need is addition
and multiplication (and working a little bit, well, a lot probably).
Bruno
> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
That would just mean that the neuronal level is too high to be the
substitution level. Better to choose the DNA and metabolic level.
Bruno
On Jul 22, 6:25 pm, meekerdb <meeke...@verizon.net> wrote:
> But that's contradicting your assumption that the "pegs" are transparent to the neural communication: "If the living cells are able to talk to each other well through the prosthetic network, then functionality should be retained"
Neurological functionality is retained but there are fewer and fewer actual neurons to comprise the network, so the content of the conversations is degraded, even though that degradation is preserved with high fidelity.
> Whatever neurons remain, even if it's only the afferent/efferent ones, they get exactly the same communication as if there were no "pegs" and the whole brain was neurons.
Think of them like sock puppets/bots multiplying in a closed social network. If you have 100 actual friends on a social network and their accounts are progressively replaced by emulated accounts posting even slightly unconvincing status updates, you rapidly lose interest in those updates and either route around them, focusing on the diminishing group of your original non-bots, or check out of the network altogether. A neuron is more than its communication.
Where does the badness come from? The afferent neurons?
>
>>> ...A neuron is more than its communication.
>>>
>> Not to the next neuron it isn't...and not to the efferent neurons. If
>> there is something that isn't communicated, it can't make a difference
>> to behavior because we know that muscles are moved by what the neurons
>> communicate to them.
>>
> Muscles aren't moved by neurons, muscles move themselves in sympathy
> with neuronal motivation. Behavior isn't everything, especially a
> third person observation of a behavior on an entirely different scale
> of physical activity.
>
But that's the crux of the argument. If behavior isn't everything then,
according to you, a person whose brain has been replaced by artificial
but functionally identical elements could be a philosophical zombie:
one whose every behavior is exactly like that of a person with a biological
brain, including reporting the same feelings. Yet that is contrary to
your assertion that they would exhibit dementia.
>
>> Or as Bruno suggests, just model it at a lower level. Of course if you
>> have to model it at the quark level, you might as well make your
>> artificial neuron out of quarks and it won't be all that "artificial".
>>
> Exactly what I've been saying. If you model only the superficial
> behaviors, you can't expect the meaningful roots of those behaviors to
> appear spontaneously.
>
>
No, you've been saying more than that. You've been saying that even if
the artificial elements emulate the biological ones at a very low level
they won't work unless they *are* biological. When I said that if you
have to model at the quark level you might as well make "real"
neurons, that was a recommendation of efficiency. According to Bruno,
and functionalist theory, it might be very inefficient to emulate the
quarks with a Turing machine, but it is in principle equally effective.
Brent
"Forensically"?? Do we need a Weinberg-English dictionary?
Brent
> On Jul 22, 7:26 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>> Comp embraces the non computable. If you study the work you will
>> understand that both matter and mind arise from the non computable,
>> with comp.
>
>> See the second part of sane04. Ask question if there are problems.
>
> I know you must have gone over it too many times already in other
> places, so I'm not expecting you to reiterate comp for me, but I
> haven't been able to see how comp embraces the non computable.
It embraces it in many places. First, the first person indeterminacy
leads to taking into account uncomputable sequences in the
first person experiences. Just iterate the Washington-Moscow
experience n times. There will be 2^n resulting versions of you, and
most will acknowledge the apparent non-computability of their history
(like WWMMWWWWMWMMWMMMMWWW ...).
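The counting behind that claim can be checked with a small script. This is only an illustrative sketch (not anything from sane04): it enumerates all 2^n W/M histories after n duplications and uses short-periodicity as a crude stand-in for "has a short description", showing that patterned histories are a vanishing fraction of the total.

```python
from itertools import product

def histories(n):
    """All 2**n possible W/M histories after n self-duplications."""
    return [''.join(h) for h in product('WM', repeat=n)]

def is_short_periodic(s, max_period=2):
    """Crude compressibility proxy: the string repeats with a period
    of at most `max_period` characters."""
    return any(s == (s[:p] * (len(s) // p + 1))[:len(s)]
               for p in range(1, max_period + 1))

n = 12
hs = histories(n)
patterned = [h for h in hs if is_short_periodic(h)]
# Only WWWW..., MMMM..., WMWM..., MWMW... qualify: 4 out of 4096.
```

The same counting argument holds for any fixed notion of "short description": the number of compressible histories grows far slower than 2^n, so almost every duplicated observer sees a history with no discernible law.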
Secondly, at the modal first-order level, none of the hypostases are
decidable: provable Bp is Π2-complete, and true Bp is Π1-complete in
the oracle of truth. This means "terribly non computable".
The theory of computability is full of results showing that the
behavior of machines is terribly NOT computable, and the machine's
theology is full of highly undecidable sentences. This should kill any
reductionist view of what numbers are capable of.
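The canonical instance of that non-computability is the halting problem, and the diagonal argument behind it fits in a few lines. This is a standard textbook sketch, not specific to comp: given any claimed halting decider, we can build a program that does the opposite of whatever the decider predicts about it.

```python
def make_diagonal(halts):
    """Given any claimed decider `halts(f) -> bool`, build a program
    that defeats it: it loops iff the decider says it halts."""
    def diagonal():
        if halts(diagonal):
            while True:      # decider said "halts", so loop forever
                pass
        # decider said "loops", so halt immediately
    return diagonal

# A (necessarily wrong) decider that claims nothing ever halts:
pessimist = lambda f: False
d = make_diagonal(pessimist)
d()  # halts immediately, refuting the pessimist's verdict on d
```

Since this construction works against *any* candidate decider, no total computable `halts` can exist, which is one precise sense in which machine behavior is "terribly NOT computable".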
> To me, any time you say that comp explains something or direct me to
> your
> work, it's the same as someone saying 'The Bible explains that'.
I have worked a lot to make all this available to any good-willing
person. The first six steps of the UDA in sane04 can be
understood without reading any textbook. Step seven needs familiarity
with the Church-Turing thesis, or with a bit of computer programming.
The AUDA, the "interview of the UM", needs some familiarity with Gödel's
1931 paper.
It should be obvious that computationalism needs a bit of computer
science.
> Not
> trying to disparage your way of teaching or motivating, just saying
> that I can't seem to do anything with it.
You can remember the result, which is going in *your* direction (at
least the UDA). We cannot have both comp and materialism. You keep
materialism, so you are coherent in abandoning comp. Unfortunately the
result is not intelligible, because you don't say explicitly what is
non-Turing-emulable in the human body.
> To me, if it can't be made
> understandable within the context of the discussion at hand, it's
> better left to another discussion.
Just tell us what you don't understand.
>
>>> I'm
>>> just going by my observation that numbers are in many ways
>>> everything
>>> that feeling is not. To get to the feeling of numbers, you have to
>>> look at something like numerology.
>>
>> I doubt that very much. Lol.
>> All you need is computer science. Actually all you need is addition
>> and multiplication (and working a little bit, well, a lot probably).
>
> What are your doubts based upon?
Numerology is poetry. It has nothing to say about the consequences of
comp. To refer to numerology in that setting is like asking an
astrologer to send a rocket into space.
Bruno
A sculpture (non-moving, dead)? Or a zombie? (behavior is preserved)
In both cases it makes DNA magical, infinite, or non-Turing-emulable. It
also makes the theory of evolution doubtful, because it means that
nature has to take into account infinite information to select the
organisms. Biological evidence points, on the contrary, to nature betting
on approximations and redundancy, and allowing a big range of perturbation
of its elements. Our material constitution changes all the time, and
allows contingent variations which would be hard to manage if all
the decimals of the physical parameters had to be taken into account.
Bruno
>> Unless you believe in zombies, the point is that there *is* enough
>> phenomenological qualia and subjectivity, and contingency, in the
>> realm of numbers. The different 1-views (the phenomenology of mind, of
>> matter, etc.) are given by the modal variants of self-reference. This
>> has been done, and this does explain the shape of modern physics
>> (where
>> physicists are lost in a labyrinth of incompatible interpretations).
>> Most of the quantum weirdnesses are theorems in arithmetic.
>
> I believe in zombies as far as it would be possible to simulate a
> human presence with a YouTube flip book as I described, or to
> simulate a human brain digitally, which would be a zombie as far as
> having any internal awareness beyond the semiconductor experience of
> permittivity/permeability/wattage, etc.
So they would behave like you and me, yet have consciousness of
permittivity/permeability/wattage?
>
>>> so that we may not only have
>>> restricted access by virtue of our own separation from each other,
>>> but
>>> qualia itself may somehow present the experience of entities which
>>> we
>>> would consider to be in the future as well as the past.
>>
>> Nonsense with comp. We just cannot *assume* things like past and
>> future.
>
> I'm saying that we human beings consider them to be in the future and
> the past, not that there is a future or past.
I am not sure I understand what you mean.
>
>> That is their error. You don't need to copy them.
>
> You think that asserting a hypothesis that feeling is not quantifiable
> is the same thing as rationalizing genocide and slavery?
Not at all. Comp prevents feelings from being quantifiable. I am on your
side here.
> I think it's
> just the opposite. It's the belief in arithmetic over subjectivity
> that is leading the planet down the primrose path to asphyxiation and
> madness.
You are the one disallowing subjectivity to some entities, based on
their 'number skin'. You are the reductionist here, telling us that
only wet human brains can think.
On the contrary, mechanism, when well understood, is a vaccine against
reductionism, even against reductionism of robots and numbers.
>
>> Only persons can think.
>
> I thought the point of comp was that digital simulation is sufficient
> to simulate thought.
Thought/consciousness exists only in the arithmetical Platonia.
Digital emulation makes them only relatively accessible to universal
machines, relative to other universal machines. I know this is a bit
of a subtle, counter-intuitive point.
>
>> That is tautological. I agree of course. But the question is about
>> the
>> nature of that system. You seem to want it described by physics. This
>> is logically OK, but you have to abandon comp. That's all.
>
> If comp cannot embrace physics, and physics cannot embrace comp, then
> we have to turn to something which reconciles both.
Comp explains where the laws of physics come from, and this without
eliminating persons and souls.
>
>> I am not convinced by argument of impossibility pointing on actual
>> technology.
>
> Not sure what you mean.
You were arguing from the current and contingent shape of today's
technology.
Bruno
> I was thinking about how a sperm resembles a brain and spinal cord
> but that the egg is more like a microcosm of a world. Conception
> plays out metaphorically as a miniature sensorimotive self entering a
> single life as a sphere which progressively articulates itself as it
> absorbs not only the genetic information, but the informer as well.
You might be interested in the statement by Dick
http://groups.google.com/group/everything-list/msg/76da2f473b3e9f96
"IF microtubules in the brain have coherence properties that equate to
consciousness
GIVEN that those microtubules map in the sense of a fate map from the
cortex of the one cell (amphibian) embryo to the brain
THEN we ought to be able to investigate those coherence properties
(consciousness?) in the one cell embryo."
If you would like to learn more about embryogenesis:
http://embryogenesisexplained.com/
The course will start again in Second Life in October.
Evgenii
Definitely. Inorganic mega-molecules can do amazing things. Enjoying a
steak dinner isn't one of them though.