On Jul 13, 2011, at 7:04 PM, Craig Weinberg <whats...@gmail.com>
wrote:
>> Again, all that matters is that the *outputs* that influence other
>> neurons are just like those of a real neuron, any *internal*
>> processes in the substitute are just supposed to be artificial
>> simulations of what goes on in a real neuron, so there might be
>> simulated genes (in a simulation running on something like a
>> silicon chip or other future computing technology) but there'd be
>> no need for actual DNA molecules inside the substitute.
>
> The assumption is that there is a meaningful difference between the
> processes physically within the cell and those that are input and
> output between the cells. That is not my view. Just as the glowing
> blue chair you are imagining now (is it a recliner? A futuristic
> cartoon?) is not physically present in any neuron or group of neurons
> in your skull -
If it is not present physically, then what causes a person to say "I
am imagining a blue chair"?
> under any imaging system or magnification. My idea of
> 'interior' is different from the physical inside of the cell body of a
> neuron. It is the interior topology. It's not even a place, it's just
> a sensorimotive
Could you please define this term? I looked it up but the
definitions I found did not seem to fit.
> awareness of itself and its surroundings - hanging on
> to its neighbors, reaching out to connect, expanding and contracting
> with the mood of the collective. This is what consciousness is. This
> is who we are. The closer you get to the exact nature of the neuron,
> the closer you get to human consciousness.
There is such a thing as too low a level. What leads you to believe
the neuron is the appropriate level to find qualia, rather than the
states of neuron groups or the whole brain? Taking the opposite
direction, why not say it must be explained in terms of chemistry or
quarks? What led you to conclude it is the neurons? After all, are
rat neurons very different from human neurons? Do rats have the same
range of qualia as we do?
> If you insist upon using
> inorganic materials, that really limits the degree to which the
> feelings it can host will be similar.
Assuming qualia supervene on the individual cells or their chemistry.
> Why wouldn't you need DNA to
> feel like something based on DNA in practically every one of its
> cells?
You would have to show that the presence of DNA in part determines the
> evolution of the brain's neural network. If not, it is as relevant to
you and your mind as the neutrinos passing through you.
>
>
>> The idea is just that *some* sufficiently detailed digital
>> simulation would behave just like real neurons and a real brain,
>> and "functionalism" as a philosophical view says that this
>> simulation would have the same mental properties (such as qualia,
>> if the functionalist thinks of "qualia" as something more than just
>> a name for a certain type of physical process) as the original brain
>
> A digital simulation is just a pattern in an abacus.
The state of an abacus is just a number, not a process. I think you
may not have a full understanding of the differences between a Turing
machine and a string of bits. A Turing machine can mimic any process
that is definable and does not take an infinite number of steps.
Turing machines are dynamic, self-directed entities. This
distinguishes them from cartoons, YouTube videos, and the state of an
abacus.
Since they have such a universal capability to mimic processes, the
idea that the brain is a process leads naturally to the idea of
intelligent computers which could function identically to organic
brains.
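To make the universality claim concrete, here is a minimal sketch of
a Turing machine interpreter in Python. This is illustrative only;
the rule table (a unary incrementer) is an invented toy example, not
anything from this thread.

```python
# Minimal Turing machine: a finite rule table plus an unbounded tape.
# A missing rule would raise KeyError; error handling is omitted here.

def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=1000):
    """rules: {(state, symbol): (new_symbol, move, new_state)}, move in {-1, +1}."""
    tape = dict(enumerate(tape))  # sparse tape; unvisited cells read as blank "_"
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, "_")
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Toy rule table: a unary incrementer (scan right past the 1s, append one more).
rules = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", +1, "halt"),
}
print(run_turing_machine(rules, "111"))  # -> "1111"
```

Any definable, finite-step process can in principle be encoded as
such a rule table; that is the sense in which the machine "mimics"
processes, as opposed to merely recording a state the way an abacus
does.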
Then, if you deny the logical possibility of zombies or fading
qualia, you must accept that such an emulation of a human mind would
be equally conscious.
> If you've got a
> gigantic abacus and a helicopter, you can make something that looks
> like whatever you want it to look like from a distance, but it's still
> just an abacus. It has no subjectivity beyond the physical materials
> that make up the beads.
The idea behind a computer simulation of a mind is not to make
something that looks like a brain but to make something that behaves
and works like a brain.
>
>
>> Everything internal to the boundary of the neuron is simulated,
>> possibly using materials that have no resemblance to biological ones.
>
> It's a dynamic system,
So is a Turing machine.
> there is no boundary like that. The
> neurotransmitters are produced by and received within the neurons
> themselves. If something produces and metabolizes biological
> molecules, then it is functioning at a biochemical level and not at
> the level of a digital electronic simulation. If you have a heat sink
> for your device it's electromotive. If you have an insulin pump it's
> biological, if you have a serotonin reuptake receptor, it's
> neurological.
>
>> So if you replace the inside of one volume with a very different
>> system that nevertheless emits the same pattern of particles at the
>> boundary of the volume, systems in other adjacent volumes "don't
>> know the difference" and their behavior is unaffected.
>
> No, I don't think that's how living things work. Remember that people's
> bodies often reject living tissue transplanted from other human
> beings.
Rejection requires the body knowing there is a difference, which is
against the starting assumption.
>
>
>> You didn't address my question about whether you agree or disagree
>> with physical reductionism in my last post, can you please do that
>> in your next response to me?
>
> I agree with physical reductionism as far as the physical side of
> things is concerned. Qualia are the opposite: they would be subject
> to experiential irreductionism. Which is why you can print
> Shakespeare on a poster or a fortune cookie and it's still
> Shakespeare, but you can't make enriched uranium out of corned beef
> or a human brain out of table salt.
>
>> Because I'm just talking about the behavioral aspects of
>> consciousness now, since it's not clear if you actually accept or
>> reject the premise that it would be possible to replace neurons
>> with functional equivalents that would leave *behavior* unaffected
>
> I'm rejecting the premise that there is such a thing as a functional
> replacement for a neuron that is sufficiently different from a neuron
> that it would matter.
I pasted real-life counterexamples to this: artificial cochleas and
retinas.
> You can make a prosthetic appliance which your
> nervous system will make do with, but it can't replace the nervous
> system altogether.
At what point does the replacement magically stop working?
> The nervous system predicts and guesses. It can
> route around damage or utilize a device which it can understand how to
> use.
So it can use an artificial retina but not an artificial neuron?
>If it is not present physically, then what causes a person to say "I
>am imagining a blue chair"?
A sensorimotive circuit: a sensory feeling which is a desire to
fulfill itself through the motive impulse to communicate that
statement.
>Could you please define this term? I looked it up but the
>definitions I found did not seem to fit.
Nerves are referred to as afferent and efferent also. My idea is that
all nerve functionality is sense (input) and motive (output). I would
say motor, but it's confusing because something like changing your
mind or making a choice is motive but not physically expressed as
motor activity, though I think they are the same thing. I am
generalizing what nerves do to the level of physics, so that our
nerves are doing the same thing that all matter is doing, just
hypertrophied to host more meta-elaborated sensorimotive phenomena.
>There is such a thing as too low a level. What leads you to believe
>the neuron is the appropriate level to find qualia, rather than the
>states of neuron groups or the whole brain?
I didn't say it was. I was just saying that the more similar you can
get to imitating a human neuron, the closer a brain based on that
imitation will be to having the potential for human consciousness.
>You would have to show that the presence of DNA in part determines the
>evolution of the brain's neural network. If not, it is as relevant to
>you and your mind as the neutrinos passing through you.
Chromosome mutations cause mutations in the brain's neural network, do
they not?
btw, I don't interpret neutrinos, photons, or other massless,
chargeless phenomena as literal particles. QM is a misinterpretation.
Accurate, but misinterpreted.
>> A digital simulation is just a pattern in an abacus.
>The state of an abacus is just a number, not a process. I think you
>may not have a full understanding of the differences between a Turing
>machine and a string of bits. A Turing machine can mimic any process
>that is definable and does not take an infinite number of steps.
>Turing machines are dynamic, self-directed entities. This
>distinguishes them from cartoons, YouTube videos, and the state of an
>abacus.
A pattern is not necessarily static, especially not in an abacus, the
purpose of which is to be able to change the positions to any number.
Just like a cartoon. If you are defining Turing machines as self-
directed entities then you have already defined them as conscious, so
it's a fallacy to present it as a question.
Since I think that a
machine cannot have a self, but is instead the self's perception of
the self's opposite, I'm not compelled by any arguments which imagine
that purely quantitative phenomena (if there were such a thing) can be
made to feel.
>Then, if you deny the logical possibility of zombies or fading
>qualia, you must accept that such an emulation of a human mind would
>be equally conscious.
These ideas are not applicable in my model of consciousness and its
relation to neurology.
>The idea behind a computer simulation of a mind is not to make
>something that looks like a brain but to make something that behaves
>and works like a brain.
I think that for it to work exactly like a brain it has to be a brain.
If you want something that behaves like an intelligent automaton, then
you can use a machine made of inorganic matter. If you want something
that feels and behaves like a living organism, then you have to create
a living organism out of matter that can self-replicate and die.
>Rejection requires the body knowing there is a difference, which is
>against the starting assumption.
If you are already defining something as biologically identical, then
you are effectively asking 'if something non-biological were
biological, would it perform biological functions?'
>I pasted real-life counterexamples to this: artificial cochleas and
>retinas.
Those are not replacements for neurons, they are prostheses for a
nervous system. Big difference. I can replace a car engine with
horses, but I can't replace a horse's brain with a car engine.
>At what point does the replacement magically stop working?
At what point does cancer magically stop you from waking up?
>So it can use an artificial retina but not an artificial neuron?
A neuron can use an artificial neuron, but a person can't use an
artificial neuron except through a living neuron.
I don't want to talk about inner experience. I want to talk about my fundamental reordering of the cosmos, which, if correct, would be staggeringly important, and which I have not seen anywhere else:
- Mind and body are not merely separate, but perpendicular topologies of the same ontological continuum of sense.
- The interior of electromagnetism is sensorimotive, the interior of determinism is free will, and the interior of general relativity is perception.
- Quantum Mechanics is a misinterpretation of atomic quorum sensing.
- Time, space, and gravity are void. Their effects are explained by perceptual relativity and sensorimotor electromagnetism.
- The "speed of light" c is not a speed it's a condition of nonlocality or absolute velocity, representing a third state of physical relation as the opposite of both stillness and motion.
It's not about meticulous logical deduction, it's about grasping the largest, broadest description of the cosmos possible which doesn't leave anything out. I just want to see if this map flies, and if not, why not?
No, it is worse, I'm afraid. I hope you don't mind my being frank.
In fundamental matters, you have to explain things from scratch.
Nothing can be taken for granted, and you have to put your assumptions
on the table, so that we avoid oblique comments and vocabulary
dispersion.
You say yourself that you don't know if you talk literally or
figuratively. That says it all, I think. You should make a choice,
and work from there. Personally, I am a literalist; that is, I apply
the scientific method. For the mind-body problem, the actually hard
part for scientists consists in understanding that once we assume the
comp hyp, we can translate "philosophical problems" into
"mathematical and/or physical problems".
Philosophers don't like that (especially continental ones), but this
fits with their usual tradition of defending academic territories and
positions (food). It is natural: as in (pseudo-)religion, they are
not very happy when people use the scientific method to invade their
fields of study.
But this means that, in interdisciplinary research, you must be able
to be understood by a majority in each field you are crossing. Even
when you succeed at this, you still have to find people with the
courage to study the connections between the domains.
A lot of scientists still believe that notions like mind and
consciousness are crackpot notions, and when sincere people try to
discuss those notions, you can be amazed by the tons of difficulties.
I have nothing against attempts toward a materialist solution of the
mind-body problem, and in that case at least we know (or should know,
or should refute) that we have to abandon even extremely weak
versions of mechanism. But this looks like introducing special (and
unknown) infinities into the mind-body puzzle, so I am not interested
without some key motivation being provided.
On this list people are open-minded toward "everything exists" types
of theories, like Everett's Many-Worlds, with an open mind on
computationalism (Schmidhuber) and mathematicalism or immaterialism
(Tegmark). So my own contribution was well suited, given that I
propose an argument showing that if we believe that we can survive
with a digitalizable body, then we dispose only of an "everything"
that is nonetheless very solid, constructive, and highly structured:
all computations (in the precise arithmetical sense of sigma_1
arithmetical relations) and their (coded) proofs. I show also that we
dispose of a very natural notion of observer, the universal machine,
and that among these machines we can already "interview" those which
can prove, know, guess, and feel about their internal views on
realities.
Everett's move to embed the physicist subject *in* the object matter
of the physical equation (the SWE) extends itself into the
arithmetical realm, with the embedding of the mathematician *in*
arithmetic, once we take the possibility of our local digitalization
seriously enough.
This shows mainly that, with comp, the mind-body problem is twice as
complex as people usually think: not only do we have to explain
qualia/consciousness from the numbers, but we have to explain
quanta/matter from the numbers too.
But universal machines have a natural theory of thought (the laws of
Boole) and a natural theory of mind (the Gödel-Löb-Solovay logics of
self-reference), and by the very existence of computer science you
get, in the end, a translation of the body problem into computer
science, which makes it automatically a problem in number theory.
Bruno
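For reference, the Gödel-Löb provability logic that Bruno invokes can
be stated compactly. This is the standard textbook axiomatization,
not anything specific to his argument, with Box p read as "p is
provable" (in, say, Peano Arithmetic). The sigma_1 relations he
mentions are those definable by sentences of the form "there exists x
such that phi(x)" with phi containing only bounded quantifiers; the
computably enumerable relations are exactly the sigma_1-definable
ones.

```latex
% Standard axiomatization of the provability logic GL:
% modal logic K plus L\"ob's axiom, closed under necessitation.
\begin{align*}
  \text{(K)}       &\quad \Box(p \to q) \to (\Box p \to \Box q) \\
  \text{(L\"ob)}   &\quad \Box(\Box p \to p) \to \Box p \\
  \text{(Nec)}     &\quad \text{from } \vdash p \text{ infer } \vdash \Box p
\end{align*}
```

Solovay's completeness theorem ties this modal system to what Peano
Arithmetic can actually prove about its own provability predicate,
which seems to be the sense in which it serves as a "natural theory
of mind" for universal machines here.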
> It's not about whether other cells would sense the imposter neuron,
> it's about how much of an imposter the neuron is. If it acts like a
> real cell in every physical way, if another organism can kill it and
> eat it and metabolize it completely, then you pretty much have a
> cell. Whatever cannot be metabolized in that way is what potentially
> detracts from the ability to sustain consciousness. It's not your
> cells that need to sense DNA, it's the question of whether a brain
> composed entirely, or significantly, of cells lacking DNA would be
> conscious in the same way as a person.
DNA doesn't play a direct role in neuronal to neuronal interaction. It
is necessary for the synthesis of proteins, so without it the neuron
would be unable to, for example, produce more surface receptors or the
essential proteins needed for cell survival; however, if the DNA were
destroyed the neuron would carry on functioning as per usual for at
least a few minutes. Now, you speculate that consciousness may somehow
reside in the components of the neuron and not just in its function,
so that perhaps if the DNA were destroyed the consciousness would be
affected - let's say for the sake of simplicity that it too would be
destroyed - even in the period the neuron was functioning normally. If
that is so, then if all the neurons in your visual cortex were
stripped of their DNA you would be blind: your visual qualia would
disappear. But if all the neurons in your visual cortex continued to
function normally, they would send the normal signals to the rest of
your brain and the rest of your brain would behave as if you could
see: that is, you would accurately describe objects put in front of
your eyes and honestly believe that you had normal vision. So how
would this state, behaving as if you had normal vision and believing
you had normal vision, differ from actually having normal vision; or
to put it differently, how do you know that you aren't blind and
merely deluded about being able to see?
--
Stathis Papaioannou
I think there could be differences in how vision is perceived if all
of the visual cortex lacked DNA, even if the neurons of the cortex
exhibited superficial evidence of normal connectivity. A person could
be dissociated from the images they see, feeling them to be
meaningless or unreal, seen as if in third person or from malicious
phantom/alien eyeballs. Maybe it would be more subtle... a sensation
of otherhanded sight, or sight seeming to originate from a place
behind the ears rather than above the nose. The non-DNA vision could
be completely inaccessible to the conscious mind, a
psychosomatic/hysterical blindness; or perhaps the qualia would be
different: unburdened by DNA, colors could seem lighter, more
saturated, like a dream. The possibilities are endless. The only way
to find out is to
do experiments.
DNA may not play a direct role in neuronal to neuronal interaction,
but the same could be said of perception itself. We have nothing to
show that perception is the necessary result of neuronal interaction.
The same interactions could exist in a simulation without any kind of
perceived universe being created somewhere. Just because the behavior
of neurons correlates with perception doesn't mean that their behavior
alone causes perception. Materials matter. A TV set made out of
hamburger won't work.
What I'm trying to say is that the sensorimotive experience of matter
is not limited to the physical interior of each component of a cell or
molecule, but rather it is a completely other, synergistic topology
which is as diffuse and experiential as the component side is discrete
and observable. There is a functional correlation, but that's just
where the two topologies intersect. Many minor physical changes to the
brain can occur without any noticeable differences in perception -
sometimes major changes, injuries, etc. Major changes in the psyche
can occur without any physical precipitate - reading a book may
unleash a flood of neurotransmitters but the cause is semantic, not
biochemical.
The requirement is that the artificial neurons interact with the
biological neurons in the normal way, so that the biological neurons
can't tell that they are imposters. This is a less stringent
requirement than making artificial neurons that are indistinguishable
from biological neurons under any test whatsoever. In the example I
gave before, a neuron with its DNA removed would continue behaving
normally for at least a few minutes, so the surrounding neurons would
not detect that anything had changed, whereas an electron micrograph
might easily show the difference.
-- Stathis Papaioannou
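Stathis's point, that only behaviour at the interface matters, can be
made concrete with a toy model. In this Python sketch (all class
names and parameters are invented for illustration), two neuron
implementations with different internals emit identical spike trains,
so a downstream observer that only sees spikes cannot tell them
apart:

```python
# Two "neurons" with different internals but identical input/output
# behaviour: a neighbour that only receives spikes cannot distinguish
# them. Toy leaky integrate-and-fire sketch; all parameters invented.

class BiologicalNeuron:
    """State kept as a floating-point membrane voltage."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v, self.threshold, self.leak = 0.0, threshold, leak

    def step(self, current):
        self.v = self.v * self.leak + current
        if self.v >= self.threshold:
            self.v = 0.0
            return 1  # spike
        return 0

class ArtificialNeuron:
    """Same dynamics computed differently: integer fixed-point state."""
    def __init__(self, threshold=1000, leak_num=9, leak_den=10):
        self.v, self.threshold = 0, threshold
        self.leak_num, self.leak_den = leak_num, leak_den

    def step(self, current):
        # voltage stored in thousandths to avoid floating point
        self.v = self.v * self.leak_num // self.leak_den + int(current * 1000)
        if self.v >= self.threshold:
            self.v = 0
            return 1
        return 0

inputs = [0.3, 0.5, 0.4, 0.0, 0.9, 0.2]
a, b = BiologicalNeuron(), ArtificialNeuron()
spikes_a = [a.step(i) for i in inputs]
spikes_b = [b.step(i) for i in inputs]
print(spikes_a == spikes_b)  # True: same outputs, different internals
```

The exact agreement here depends on the chosen inputs (rounding could
diverge in general); the point is only that the interface, not the
substrate, fixes what neighbouring neurons can detect.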
No, but it means the chicken head isn't necessary for walking - just like
DNA isn't necessary for consciousness.
Brent
Interaction with the world. Information processing. Memory. A point
of view; i.e. model of the world including self. Purpose/values.
Brent
I think you have failed to address the point made by several people so
far, which is that if the replacement neurons can interact with the
remaining biological neurons in a normal way, then it is not possible
for there to be a change in consciousness. The important thing is
the **behaviour of the replacement neurons from the point of view of
the biological neurons**.
--
Stathis Papaioannou
Better than magic topology.
Brent