RE: bruno list

Jesse Mazer

Jul 13, 2011, 1:23:30 PM
to everyth...@googlegroups.com

Craig Weinberg wrote:

>It's weird, I get an error when I try to reply in any way to your last post. Here's what I'm trying to reply:

>The crux of the whole issue is what we mean by functionally indistinguishable.

But I specified what I meant (and what I presume Chalmers meant)--that any physical influences such as neurotransmitters that other neurons respond to (in terms of the timing of their own electrochemical pulses, and the growth and death of their synapses) are still emitted by the substitute, so that the other neurons "can't tell the difference" and their behavior is unchanged from what it would be if the neuron hadn't been replaced by an artificial substitute.

>If you aren't talking about silicon chips or digital simulation, then you are talking about a different level of function. Would your artificial neuron synthesize neurotransmitters, detect and respond to neurotransmitters, even emulate genetics?

I said that it would emit neurotransmitters--whether it synthesized them internally or had a supply that was periodically replenished by nanobots or something is irrelevant. Again, all that matters is that the *outputs* that influence other neurons are just like those of a real neuron; any *internal* processes in the substitute are just supposed to be artificial simulations of what goes on in a real neuron, so there might be simulated genes (in a simulation running on something like a silicon chip or other future computing technology) but there'd be no need for actual DNA molecules inside the substitute.

>If you get down to the level of the pseudobiological, then the odds of being able to replace neurons successfully gets much higher to me. To me, that's not what functionalism is about though. I think of functionalism as confidence in a more superficial neural network simulation of logical nodes. Virtual consciousness.

I don't think functionalism means confidence that the extremely simplified "nodes" of most modern neural networks would be sufficient for a simulated brain that behaved just like a real one; it might well be that much more detailed simulations of individual neurons would be needed for mind uploading. The idea is just that *some* sufficiently detailed digital simulation would behave just like real neurons and a real brain, and "functionalism" as a philosophical view says that this simulation would have the same mental properties (such as qualia, if the functionalist thinks of "qualia" as something more than just a name for a certain type of physical process) as the original brain (see the first sentence defining 'functionalism' at http://plato.stanford.edu/entries/functionalism/ ).

>If you're going to get down to the biological substitution level of emulating the tissue itself so that the tissue is biologically indistinguishable from brain tissue, but maybe has some plastic or whatever instead of cytoplasm, then sure, that might work. As long as you've got real DNA, real ions, real sensitivity to real neurotransmitters, then yeah that could work.

No, that's not what I'm talking about. Everything internal to the boundary of the neuron is simulated, possibly using materials that have no resemblance to biological ones. But all the relevant molecules and electromagnetic waves which leave the boundary of the original neuron (those relevant to the behavior of other neurons, so for example visible light waves probably don't need to be included) are still emitted by the artificial substitute, like neurotransmitters.

As I said, a reductionist should believe that the behavior of a complex system is in principle explainable as nothing more than the sum of all the interactions of its parts. And if the reductionist grants that at the scale of neurons, entanglement isn't relevant to how they interact (because of decoherence), then we should be able to assume that the behavior of the system is a sum of *local* interactions between particles that are close to one another in space. So if we divide a large system into a bunch of small volumes, the only way processes happening within one volume can have any causal influence on processes happening within a second adjacent volume is via local interactions that happen at the *boundary* between the two volumes, or particles passing through this boundary which later interact with others inside the second volume. So if you replace the inside of one volume with a very different system that nevertheless emits the same pattern of particles at the boundary of the volume, systems in other adjacent volumes "don't know the difference" and their behavior is unaffected. You didn't address my question about whether you agree or disagree with physical reductionism in my last post; can you please do that in your next response to me?
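To make the boundary argument concrete, here is a toy sketch in Python (everything in it, the threshold rule, the class names, the spike trains, is invented purely for illustration and is not a claim about real neurons): two "neurons" with completely different internals emit identical outputs at their boundary, so a downstream observer that sees only those outputs cannot distinguish them.

```python
class BiologicalNeuron:
    """Stand-in for the original cell: fires when at least two inputs are active."""

    def fire(self, inputs):
        # 'internal biochemistry' is modeled here as a simple threshold sum
        return 1 if sum(inputs) >= 2 else 0


class ArtificialNeuron:
    """A substitute with totally different internals (a memoized lookup table
    built by simulation) but identical outputs at the boundary."""

    def __init__(self):
        # precomputed 'simulation' of the original cell's input/output behavior
        self._table = {}

    def fire(self, inputs):
        key = tuple(inputs)
        if key not in self._table:
            self._table[key] = 1 if sum(key) >= 2 else 0  # simulated result
        return self._table[key]


def downstream_behavior(neuron, spike_trains):
    """A neighboring system that can only observe what crosses the boundary."""
    return [neuron.fire(train) for train in spike_trains]


trains = [(1, 1, 0), (0, 0, 1), (1, 1, 1), (0, 1, 0)]
original = downstream_behavior(BiologicalNeuron(), trains)
substitute = downstream_behavior(ArtificialNeuron(), trains)
# original == substitute: the neighbor "can't tell the difference"
```

The point is only about locality of influence: swapping the internals while holding the boundary outputs fixed leaves everything downstream unchanged.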

>>You can simulate the large-scale behavior of water using only the basic quantum laws that govern interactions between the charged particles that make up the atoms in each water molecule-

>Simulating the behavior of water isn't the same thing as being able to create synthetic water. If you are starving, watching a movie that explains a roast beef sandwich doesn't help you. Why would consciousness be any different?

Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected (both the behavior of other nearby neurons, and behavior of the whole person in the form of muscle movement triggered by neural signals, including speech about what the person was feeling). If you do accept that premise, then we can move on to Chalmers' argument about the implausibility of dancing/fading qualia in situations where behavior is completely unaffected--you also have not really given a clear answer to the question of whether you think there could be situations where behavior is completely unaffected but qualia are changing or fading. But one thing at a time, first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole.

>If you replaced the log in your fireplace with a fluorescent tube, it's not going to be the functional equivalent of fire if you are freezing in the winter. The problem with consciousness is that we don't know which functions, if any, make the difference between the possibility of consciousness or not. I see our human consciousness as an elaboration of animal experience, so that anything that can emulate human consciousness must be able to feel like an animal, which means feeling like you are made of meat that wants to eat, fuck, kill, run, sleep, and avoid pain.

Again, not talking about consciousness at the moment, just behaviors that we associate with consciousness. That's why, in answer to your question about synthetic water, I imagined a robot whose limb movements depend on the motions of water in an internal tank, and pointed out that if you replaced the tank with a sufficiently good simulation, the external limb movements of the robot shouldn't be any different. 

>>I don't see why that follows, we don't see darwinian evolution in non-organic systems either but that doesn't prove that darwinian evolution somehow requires something more than just a physical system with the right type of organization (basically a system that can self-replicate, and which has the right sort of stable structure to preserve hereditary information to a high degree but also with enough instability for "mutations" in this information from one generation to the next)

>If we can make an inorganic material that can self-replicate, mutate, and die, then it stands more of a chance to be able to develop its detection into something like sensation, then feeling, thinking, morality, etc. There must be some reason why it doesn't happen naturally after 4 billion years here, so I suspect that reinventing it won't be worth the trouble. Why not just use organic molecules instead?

I don't really want to get into the general question of the advantages and disadvantages of trying to have darwinian evolution in non-organic systems, I was just addressing your specific claim that if consciousness is just a matter of organization we should expect to see it already in non-organic systems. My point was that if you agree that the basic notion of "Darwinian evolution" is purely a matter of organization and not the details of what a system is made of (Do you in fact agree with that? Regardless of whether it might be *easier* to implement Darwinian evolution in an organic system, hopefully you wouldn't say it's in-principle impossible to implement self-replication with heredity and mutation in a non-organic system?), then it's clear that in general it cannot be true that "Feature X which we see in organic systems is purely a matter of organization" implies "We should expect to see natural examples of Feature X in non-organic systems as well". 
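As a toy illustration of that organizational point (the parameters and bit-string "replicators" below are invented; this is not a model of any real chemistry), self-replication with heredity, mutation, and selection can be implemented in a purely digital medium:

```python
import random


def evolve(target, pop_size=50, mutation_rate=0.1, generations=200, seed=1):
    """Heredity + mutation + selection in a non-organic (digital) medium.
    A 'replicator' is just a list of bits; fitness is similarity to target."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(genome):
        return sum(a == b for a, b in zip(genome, target))

    # initial population of random replicators
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:
            break  # a perfect replicator has evolved
        survivors = pop[: pop_size // 2]  # selection: the fitter half replicates
        children = [
            # heredity: copy the parent's bits; mutation: occasionally flip one
            [bit ^ (rng.random() < mutation_rate) for bit in parent]
            for parent in survivors
        ]
        pop = survivors + children
    return max(pop, key=fitness)


best = evolve([1, 0] * 8)
```

Nothing organic appears anywhere, yet the three Darwinian ingredients (copying, imperfect inheritance, differential survival) are all present.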

Jesse


On Jul 12, 8:36 pm, Jesse Mazer <laserma...@hotmail.com> wrote:
> > Date: Tue, 12 Jul 2011 15:50:12 -0700
> > Subject: Re: Bruno's blasphemy.
> > From: whatsons...@gmail.com
> > To: everyth...@googlegroups.com
>
> > Thanks, I always seem to like Chalmers perspectives. In this case I
> > think that the hypothesis of physics I'm working from changes how I
> > see this argument compared to how I would have a couple years ago. My
> > thought now is that although organizational invariance is valid,
> > molecular structure is part of the organization. I think that
> > consciousness is not so much a phenomenon that is produced, but an
> > essential property that is accessed in different ways through
> > different organizations.
>
> But how does this address the thought-experiment? If each neuron were indeed replaced one by one by a functionally indistinguishable substitute, do you think the qualia would change somehow without the person's behavior changing in any way, so they still maintained that they noticed no differences?
>
>
>
> > I'll just throw out some thoughts:
>
> > If you take an MRI of a silicon brain, it's going to look nothing like
> > a human brain. If an MRI can tell the difference, why can't the brain
> > itself?
>
> Because neurons (including those controlling muscles) don't see each other visually, they only "sense" one another by certain information channels such as neurotransmitter molecules which go from one neuron to another at the synaptic gap. So if the artificial substitutes gave all the same type of outputs that other neurons could sense, like sending neurotransmitter molecules to other neurons (and perhaps other influences like creating electromagnetic fields which would affect action potentials traveling along nearby neurons), then the system as a whole should behave identically in terms of neural outputs to muscles (including speech acts reporting inner sensations of color and whether or not the qualia are "dancing" or remaining constant), even if some other system that can sense information about neurons that neurons themselves cannot (like a brain scan which can show something about the material or even shape of neurons) could tell the difference.
>
> > Can you make synthetic water? Why not?
>
> You can simulate the large-scale behavior of water using only the basic quantum laws that govern interactions between the charged particles that make up the atoms in each water molecule--see http://www.udel.edu/PR/UDaily/2007/mar/water030207.html for a discussion. If you had a robot whose external behavior was somehow determined by the behavior of water in an internal hidden tank (say it had some scanners watching the motion of water in that tank, and the scanners would send signals to the robotic limbs based on what they saw), then the external behavior of the robot should be unchanged if you replaced the actual water tank with a sufficiently detailed simulation of a water tank of that size.
>
> > If consciousness is purely organizational, shouldn't we see an example
> > of non-living consciousness in nature? (Maybe we do but why don't we
> > recognize it as such). At least we should see an example of an
> > inorganic organism.
>
> I don't see why that follows, we don't see darwinian evolution in non-organic systems either but that doesn't prove that darwinian evolution somehow requires something more than just a physical system with the right type of organization (basically a system that can self-replicate, and which has the right sort of stable structure to preserve hereditary information to a high degree but also with enough instability for "mutations" in this information from one generation to the next). In fact I think most scientists would agree that intelligent purposeful and flexible behavior must have something to do with darwinian or quasi-darwinian processes in the brain (quasi-darwinian to cover something like the way an ant colony selects the best paths to food, which does involve throwing up a lot of variants and then creating new variants closer to successful ones, but doesn't really involve anything directly analogous to "genes" or self-replication of scent trails). That said, since I am philosophically inclined towards monism I do lean towards the idea that perhaps all physical processes might be associated with some very "basic" form of qualia, even if the sort of complex, differentiated and meaningful qualia we experience are only possible in adaptive systems like the brain (Chalmers discusses this sort of panpsychist idea in his book "The Conscious Mind", and there's also a discussion of "naturalistic panpsychism" at http://www.hedweb.com/lockwood.htm#naturalistic ).
>
>
>
> > My view of awareness is now subtractive and holographic (think pinhole
> > camera), so that I would read fading qualia in a different way. More
> > like dementia.. attenuating connectivity between different aspects of
> > the self, not changing qualia necessarily. The brain might respond to
> > the implanted chips, even ruling out organic rejection, the native
> > neurology may strengthen its remaining connections and attempt to
> > compensate for the implants with neuroplasticity, routing around the
> > 'damage'.
>
> But here you seem to be rejecting the basic premise of Chalmers' thought experiment, which supposes that one could replace neurons with *functionally* indistinguishable substitutes, so that the externally-observable behavior of other nearby neurons would be no different from what it would be if the neurons hadn't been replaced. If you accept physical reductionism--the idea that the external behavior (as opposed to inner qualia) of any physical system is in principle always reducible to the interactions of all its basic components such as subatomic particles, interacting according to the same universal laws (like how the behavior of a collection of water molecules can be reduced to the interaction of all the individual charged particles obeying basic quantum laws)--then it seems to me you should accept that as long as an artificial neuron created the same physical "outputs" as the neuron it replaced (such as neurotransmitter molecules and electromagnetic fields), then the behavior of surrounding neurons should be unaffected. If you object to physical reductionism, or if you don't object to it but somehow still reject the idea that it would be possible to predict a real neuron's "outputs" with a computer simulation, or reject the idea that as long as the outputs at the boundary of the original neuron were unchanged the other neurons wouldn't behave any differently, please make it clear so I can understand what specific premise of Chalmers' thought-experiment you are rejecting.
> Jesse                                   

Craig Weinberg

Jul 13, 2011, 8:04:19 PM
to Everything List
>Again, all that matters is that the *outputs* that influence other neurons are just like those of a real neuron, any *internal* processes in the substitute are just supposed to be artificial simulations of what goes on in a real neuron, so there might be simulated genes (in a simulation running on something like a silicon chip or other future computing technology) but there'd be no need for actual DNA molecules inside the substitute.

The assumption is that there is a meaningful difference between the
processes physically within the cell and those that are input and
output between the cells. That is not my view. Just as the glowing
blue chair you are imagining now (is it a recliner? A futuristic
cartoon?) is not physically present in any neuron or group of neurons
in your skull - under any imaging system or magnification. My idea of
'interior' is different from the physical inside of the cell body of a
neuron. It is the interior topology. It's not even a place, it's just
a sensorimotive awareness of itself and its surroundings - hanging on
to its neighbors, reaching out to connect, expanding and contracting
with the mood of the collective. This is what consciousness is. This
is who we are. The closer you get to the exact nature of the neuron,
the closer you get to human consciousness. If you insist upon using
inorganic materials, that really limits the degree to which the
feelings it can host will be similar. Why wouldn't you need DNA to
feel like something based on DNA in practically every one of its
cells?

>The idea is just that *some* sufficiently detailed digital simulation would behave just like real neurons and a real brain, and "functionalism" as a philosophical view says that this simulation would have the same mental properties (such as qualia, if the functionalist thinks of "qualia" as something more than just a name for a certain type of physical process) as the original brain

A digital simulation is just a pattern in an abacus. If you've got a
gigantic abacus and a helicopter, you can make something that looks
like whatever you want it to look like from a distance, but it's still
just an abacus. It has no subjectivity beyond the physical materials
that make up the beads.

>Everything internal to the boundary of the neuron is simulated, possibly using materials that have no resemblance to biological ones.

It's a dynamic system, there is no boundary like that. The
neurotransmitters are produced by and received within the neurons
themselves. If something produces and metabolizes biological
molecules, then it is functioning at a biochemical level and not at
the level of a digital electronic simulation. If you have a heat sink
for your device it's electromotive. If you have an insulin pump it's
biological, if you have a serotonin reuptake receptor, it's
neurological.

>So if you replace the inside of one volume with a very different system that nevertheless emits the same pattern of particles at the boundary of the volume, systems in other adjacent volumes "don't know the difference" and their behavior is unaffected.

No, I don't think that's how living things work. Remember that people's
bodies often reject living tissue transplanted from other human
beings.

>You didn't address my question about whether you agree or disagree with physical reductionism in my last post, can you please do that in your next response to me?

I agree with physical reductionism as far as the physical side of
things is concerned. Qualia is the opposite that would be subject to
experiential irreductionism. Which is why you can print Shakespeare on
a poster or a fortune cookie and it's still Shakespeare, but you can't
make enriched uranium out of corned beef or a human brain out of table
salt.

>Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected

I'm rejecting the premise that there is such a thing as a functional
replacement for a neuron that is sufficiently different from a neuron
that it would matter. You can make a prosthetic appliance which your
nervous system will make do with, but it can't replace the nervous
system altogether. The nervous system predicts and guesses. It can
route around damage or utilize a device which it can understand how to
use.

>first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole.

This is tautological. You are making a nonsense distinction between
its 'internal' structure and what it does. If the internal structure
is equivalent enough, then it will be functionally equivalent to other
neurons and the organism at large. If it's not, then it won't be.
Interior mechanics that produce organic molecules and absorb them
through a semipermeable membrane are biological cells. If you can make
something that does that out of something other than nucleic acids,
then cool, but why bother? Just build the cell you want
nanotechnologically.

>Again, not talking about consciousness at the moment, just behaviors that we associate with consciousness. That's why, in answer to your question about synthetic water, I imagined a robot whose limb movements depend on the motions of water in an internal tank, and pointed out that if you replaced the tank with a sufficiently good simulation, the external limb movements of the robot shouldn't be any different.

If you are interested in the behaviors of consciousness only, all you
have to do is watch a YouTube video and you will see a simulated
consciousness behaving. Can you produce something that acts like it's
conscious? Of course.

>My point was that if you agree that the basic notion of "Darwinian evolution" is purely a matter of organization and not the details of what a system is made of (Do you in fact agree with that? Regardless of whether it might be *easier* to implement Darwinian evolution in an organic system, hopefully you wouldn't say it's in-principle impossible to implement self-replication with heredity and mutation in a non-organic system?), then it's clear that in general it cannot be true that "Feature X which we see in organic systems is purely a matter of organization" implies "We should expect to see natural examples of Feature X in non-organic systems as well".

It's a false equivalence. Darwinian evolution is a relational
abstraction and consciousness or life is a concrete experience. The
fact that we can call anything which follows a statistical pattern of
iterative selection 'Darwinian evolution' just means that it is a
basic relation of self-replicating elements in a dynamic mechanical
system. That living matter and consciousness only appear out of a
particular recipe of organic molecules doesn't mean that there can't
be another recipe, however it does tend to support the observation
that life and consciousness is made out of some things and not others,
and certainly it supports that it is not likely a phenomenon which can
be produced by combinations of anything physical, let alone something
purely computational.


Jason Resch

Jul 13, 2011, 9:16:56 PM
to everyth...@googlegroups.com

On Jul 13, 2011, at 7:04 PM, Craig Weinberg <whats...@gmail.com>
wrote:

>> Again, all that matters is that the *outputs* that influence other
>> neurons are just like those of a real neuron, any *internal*
>> processes in the substitute are just supposed to be artificial
>> simulations of what goes on in a real neuron, so there might be
>> simulated genes (in a simulation running on something like a
>> silicon chip or other future computing technology) but there'd be
>> no need for actual DNA molecules inside the substitute.
>
> The assumption is that there is a meaningful difference between the
> processes physically within the cell and those that are input and
> output between the cells. That is not my view. Just as the glowing
> blue chair you are imagining now (is it a recliner? A futuristic
> cartoon?) is not physically present in any neuron or group of neurons
> in your skull -

If it is not present physically, then what causes a person to say "I
am imagining a blue chair"?

> under any imaging system or magnification. My idea of
> 'interior' is different from the physical inside of the cell body of a
> neuron. It is the interior topology. It's not even a place, it's just
> a sensorimotive

Could you please define this term? I looked it up but the
definitions I found did not seem to fit.

> awareness of itself and it's surroundings - hanging on
> to it's neighbors, reaching out to connect, expanding and contracting
> with the mood of the collective. This is what consciousness is. This
> is who we are. The closer you get to the exact nature of the neuron,
> the closer you get to human consciousness.

There is such a thing as too low a level. What leads you to believe
the neuron is the appropriate level to find qualia, rather than the
states of neuron groups or the whole brain? Taking the opposite
direction, why not say it must be explained in terms of chemistry or
quarks? What led you to conclude it is the neurons? After all, are
rat neurons very different from human neurons? Do rats have the same
range of qualia as we?

> If you insist upon using
> inorganic materials, that really limits the degree to which the
> feelings it can host will be similar.

Assuming qualia supervene on the individual cells or their chemistry.

> Why wouldn't you need DNA to
> feel like something based on DNA in practically every one of it's
> cells?

You would have to show that the presence of DNA in part determines the
evolution of the brain's neural network. If not, it is as relevant to
you and your mind as the neutrinos passing through you.

>
>
>> The idea is just that *some* sufficiently detailed digital
>> simulation would behave just like real neurons and a real brain,
>> and "functionalism" as a philosophical view says that this
>> simulation would have the same mental properties (such as qualia,
>> if the functionalist thinks of "qualia" as something more than just
>> a name for a certain type of physical process) as the original brain
>
> A digital simulation is just a pattern in an abacus.

The state of an abacus is just a number, not a process. I think you
may not have a full understanding of the differences between a Turing
machine and a string of bits. A Turing machine can mimic any process
that is definable and does not take an infinite number of steps.
Turing machines are dynamic, self-directed entities. This
distinguishes them from cartoons, YouTube videos and the state of an
abacus.

Since they have such a universal capability to mimic processes, then
the idea that the brain is a process leads naturally to the idea of
intelligent computers which could function identically to organic
brains.
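That difference between a static string of bits and a dynamic process can be sketched with a minimal Turing machine (the transition table below is an invented toy example): the same tape that would just sit there as data becomes a self-directed process once a head reads, writes, moves, and changes state over it.

```python
def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    """Minimal Turing machine: repeatedly read the symbol under the head,
    then write, move, and change state according to the transition table."""
    cells = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip(blank)


# A toy table: flip every bit on the tape, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "N", "halt"),
}

result = run_turing_machine("1011", flip_bits)  # → "0100"
```

The bits on the tape are inert on their own; it is the read-write-move loop that makes the machine a process rather than a number.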

Then, if you deny the logical possibility of zombies, or fading
qualia, you must accept such an emulation of a human mind would be
equally conscious.

> If you've got a
> gigantic abacus and a helicopter, you can make something that looks
> like whatever you want it to look like from a distance, but it's still
> just an abacus. It has no subjectivity beyond the physical materials
> that make up the beads.

The idea behind a computer simulation of a mind is not to make
something that looks like a brain but to make something that behaves
and works like a brain.

>
>
>> Everything internal to the boundary of the neuron is simulated,
>> possibly using materials that have no resemblance to biological ones.
>
> It's a dynamic system,

So is a Turing machine.

> there is no boundary like that. The
> neurotransmitters are produced by and received within the neurons
> themselves. If something produces and metabolizes biological
> molecules, then it is functioning at a biochemical level and not at
> the level of a digital electronic simulation. If you have a heat sink
> for your device it's electromotive. If you have an insulin pump it's
> biological, if you have a serotonin reuptake receptor, it's
> neurological.
>
>> So if you replace the inside of one volume with a very different
>> system that nevertheless emits the same pattern of particles at the
>> boundary of the volume, systems in other >adjacent volumes "don't
>> know the difference" and their behavior is unaffected.
>
> No, I don't think that's how living things work. Remember that people's
> bodies often reject living tissue transplanted from other human
> beings.

Rejection requires the body knowing there is a difference, which is
against the starting assumption.

>
>
>> You didn't address my question about whether you agree or disagree
>> with physical reductionism in my last post, can you please do that
>> in your next response to me?
>
> I agree with physical reductionism as far as the physical side of
> things is concerned. Qualia is the opposite that would be subject to
> experiential irreductionism. Which is why you can print Shakespeare on
> a poster or a fortune cookie and it's still Shakespeare, but you can't
> make enriched uranium out of corned beef or a human brain out of table
> salt.
>
>> Because I'm just talking about the behavioral aspects of
>> consciousness now, since it's not clear if you actually accept or
>> reject the premise that it would be possible to replace >neurons
>> with functional equivalents that would leave *behavior* unaffected
>
> I'm rejecting the premise that there is a such thing as a functional
> replacement for a neuron that is sufficiently different from a neuron
> that it would matter.

I pasted real-life counterexamples to this: artificial cochleas and
retinas.

> You can make a prosthetic appliance which your
> nervous system will make do with, but it can't replace the nervous
> system altogether.

At what point does the replacement magically stop working?

> The nervous system predicts and guesses. It can
> route around damage or utilize a device which it can understand how to
> use.

So it can use an artificial retina but not an artificial neuron?

> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
> .
>
>
>

Jesse Mazer

Jul 13, 2011, 10:12:20 PM7/13/11
to everyth...@googlegroups.com


> Date: Wed, 13 Jul 2011 17:04:19 -0700
> Subject: Re: bruno list
> From: whats...@gmail.com
> To: everyth...@googlegroups.com


> >Again, all that matters is that the *outputs* that influence other neurons are just like those of a real neuron, any *internal* processes in the substitute are just supposed to be artificial simulations of what goes on in a real neuron, so there might be simulated genes (in a simulation running on something like a silicon chip or other future computing technology) but there'd be no need for actual DNA molecules inside the substitute.

> The assumption is that there is a meaningful difference between the
> processes physically within the cell and those that are input and
> output between the cells. That is not my view. Just as the glowing
> blue chair you are imagining now (is it a recliner? A futuristic
> cartoon?) is not physically present in any neuron or group of neurons
> in your skull - under any imaging system or magnification. My idea of
> 'interior' is different from the physical inside of the cell body of a
> neuron. It is the interior topology. It's not even a place, it's just
> a sensorimotive awareness of itself and its surroundings - hanging on
> to its neighbors, reaching out to connect, expanding and contracting
> with the mood of the collective. This is what consciousness is. This
> is who we are.

You're misunderstanding what I meant by "internal". I wasn't talking about subjective interiority (qualia), but *only* about the physical processes in the spatial interior of the cell. I am trying to first concentrate on external behavioral issues that don't involve qualia at all, to see whether your disagreement with Chalmers' argument is because you disagree with the basic starting premise that it would be possible to replace neurons by artificial substitutes which would not alter the *behavior* of surrounding neurons (or of the person as a whole); only after assuming this does Chalmers go on to speculate about what would happen to qualia as neurons were gradually replaced in this way. Remember this paragraph from my last post:

"Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected (both the behavior of other nearby neurons, and behavior of the whole person in the form of muscle movement triggered by neural signals, including speech about what the person was feeling). If you do accept that premise, then we can move on to Chalmers' argument about the implausibility of dancing/fading qualia in situations where behavior is completely unaffected--you also have not really given a clear answer to the question of whether you think there could be situations where behavior is completely unaffected but qualia are changing or fading. But one thing at a time, first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole."

The reason I want to separate these two issues, and first deal only with physical behaviors, is that in your original answer to my question about Chalmers' thought-experiment you made several comments suggesting there would be behavioral changes, like the suggestion that replacing parts of the brain with artificial substitutes would cause "dementia" (which normally leads to changes in behavior) and the suggestion that "the native neurology may strengthen it's remaining connections and attempt to compensate for the implants with neuroplasticity, routing around the 'damage'." So please, until we have this issue settled of whether it would be possible in principle to create substitutes which caused no behavioral changes in surrounding neurons or in the whole person, can we leave aside issues relating to qualia and subjectivity?



> >Everything internal to the boundary of the neuron is simulated, possibly using materials that have no resemblance to biological ones.

> It's a dynamic system, there is no boundary like that.

If you accept reductionism and accept that all interactions between the basic units are *local* ones, then you can divide up any complex system into a collection of volumes in absolutely any way you please (you don't have to pick volumes that correspond to 'natural' boundaries like the edges of a cell), and it will always be true that physical processes in one volume can only be influenced by other volumes via local influences (like molecules or photons) coming through that system's boundary. If you don't agree with this I don't think you understand the basic idea of a reductionist theory based on local interactions.

>The
> neurotransmitters are produced by and received within the neurons
> themselves.

Sure, but other neurons don't know anything about the history of neurotransmitter molecules arriving at their own "input" synapses, if exactly the same neurotransmitter molecules were arriving they wouldn't behave differently depending on whether those molecules had been synthesized inside a cell or were constructed by a nanobot or something.



> >So if you replace the inside of one volume with a very different system that nevertheless emits the same pattern of particles at the boundary of the volume, systems in other adjacent volumes "don't know the difference" and their behavior is unaffected.

> No, I don't think that's how living things work. Remember that people's
> bodies often reject living tissue transplanted from other human
> beings.

Why do you think that's a reason to reject the local reductionist principle I suggest? In a local reductionist theory, presumably the reason that my cells reject foreign tissue has to do with the foreign tissue giving off molecules that don't match the ones given off by my own cells, and my cells picking up those molecules and reacting to them. See for example the discussion of "histocompatibility molecules" at http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/H/HLA.html

Are you suggesting that even if the molecules given off by foreign cells were no different at all from those given off by my own cells, my cells would nevertheless somehow be able to nonlocally sense that the DNA in the nuclei of these cells was foreign?


> >You didn't address my question about whether you agree or disagree with physical reductionism in my last post, can you please do that in your next response to me?

> I agree with physical reductionism as far as the physical side of
> things is concerned. 

Well, it's not clear to me that you understand the implications of physical reductionism based on your rejection of my comments about physical processes in one volume only being affected via signals coming across the boundary. Unless the issue is that you accept physical reductionism, but reject the idea that we can treat all interactions as being local ones (and again I would point out that while entanglement may involve a type of nonlocal interaction--though this isn't totally clear, many-worlds advocates say they can explain entanglement phenomena in a local way--because of decoherence, it probably isn't important for understanding how different neurons interact with one another). 


> >Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected

> I'm rejecting the premise that there is a such thing as a functional
> replacement for a neuron that is sufficiently different from a neuron
> that it would matter.

And is that because you reject the idea that in any volume of space, physical processes outside that volume can only be affected by processes in its interior via particles (or other local signals) crossing the boundary of that volume?



> >first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole.

> This is tautological. You are making a nonsense distinction between
> its 'internal' structure and what it does. If the internal structure
> is equivalent enough, then it will be functionally equivalent to other
> neurons and the organism at large.

I don't know what you mean by "functionally equivalent" though. Are you using that phrase to suggest some sort of similarity in the actual molecules and physical structure of what's inside the boundary? My point is that it's perfectly possible to imagine replacing a neuron with something that has a totally different physical structure, like a tiny carbon nanotube computer, which senses incoming neurotransmitter molecules (and any other relevant physical inputs from nearby cells), calculates how the original neuron would have behaved in response to those inputs if it were still there, uses those calculations to figure out what signals the neuron would have been sending out of the boundary, and then makes sure to send the exact same signals itself (again, imagine that it has a store of neurotransmitters which can be sent out of an artificial synapse into the synaptic gap connected to some other neuron). So it *is* "functionally equivalent" if by "function" you just mean what output signals it transmits in response to what input signals, but it's not functionally equivalent if you're talking about its actual internal structure.

Also note that these hypothetical carbon nanotube computers only need to emit actual neurotransmitters at points where they interface with regular biological cells. As you replace more and more biological cells with substitutes you could start to have synapses where one artificial neuron is connected to another artificial neuron, then they could dispense with the step of sending actual neurotransmitter molecules through the gap and instead just simulate this process to figure out how one artificial neuron should influence the other.
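The "same inputs in, same outputs out" criterion can be sketched abstractly. The classes, the threshold, and the response rule below are all hypothetical placeholders (a toy threshold model, not real neurochemistry); the point is only that equivalence is defined at the boundary:

```python
# Toy sketch of functional equivalence: a substitute counts as "functionally
# equivalent" if, for every input arriving at its boundary, it emits the same
# output as the original. The internals can differ completely.

class BiologicalNeuron:
    """Stand-in for the original cell: fires when summed input exceeds a threshold."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def respond(self, neurotransmitter_levels):
        return sum(neurotransmitter_levels) > self.threshold  # fire or not

class ArtificialSubstitute:
    """Different internal implementation, tuned to match the original's boundary behavior."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def respond(self, neurotransmitter_levels):
        # Computed a different way internally (imagine a simulation running here),
        # but producing the same output for every input.
        total = 0.0
        for level in neurotransmitter_levels:
            total += level
        return total > self.threshold

# Neighboring neurons "can't tell the difference": every input yields the same output.
inputs = [[0.2, 0.3], [0.9, 0.4], [1.5], [0.0]]
original, substitute = BiologicalNeuron(), ArtificialSubstitute()
assert all(original.respond(i) == substitute.respond(i) for i in inputs)
```

Nothing outside the `respond` boundary can distinguish the two objects, which is the whole content of the thought-experiment's premise.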

> If it's not, then it won't be.
> Interior mechanics that produce organic molecules and absorb them
> through a semipermeable membrane are biological cells. If you can make
> something that does that out of something other than nucleic acids,
> then cool, but why bother? 

No one is suggesting this would be a useful thing to do practically, it's a philosophical thought-experiment. If you do accept that it would be possible in principle to gradually replace real neurons with artificial ones in a way that wouldn't change the behavior of the remaining real neurons and wouldn't change the behavior of the person as a whole, but with the artificial ones having a very different internal structure and material composition than the real ones, then we can move on to Chalmers' argument about why this sort of behavioral indistinguishability suggests qualia probably wouldn't change either. But as I said I don't want to discuss that unless we're clear on whether you accept the original premise of the thought-experiment.


> >Again, not talking about consciousness at the moment, just behaviors that we associate with consciousness. That's why, in answer to your question about synthetic water, I imagined a robot whose limb movements depend on the motions of water in an internal tank, and pointed out that if you replaced the tank with a sufficiently good simulation, the external limb movements of the robot shouldn't be any different.

> If you are interested in the behaviors of consciousness only, all you
> have to do is watch a YouTube video and you will see a simulated
> consciousness behaving.

That's just a recording of something that actually happened to a biological consciousness, not a simulation which can respond to novel external stimuli (like new questions I can think to ask it) which weren't presented to any biological original.


> >My point was that if you agree that the basic notion of "Darwinian evolution" is purely a matter of organization and not the details of what a system is made of (Do you in fact agree with that? Regardless of whether it might be *easier* to implement Darwinian evolution in an organic system, hopefully you wouldn't say it's in-principle impossible to implement self-replication with heredity and mutation in a non-organic system?), then it's clear that in general it cannot be true that "Feature X which we see in organic systems is purely a matter of organization" implies "We should expect to see natural examples of Feature X in non-organic systems as well".

> It's a false equivalence. Darwinian evolution is a relational
> abstraction and consciousness or life is a concrete experience.

But when you originally asked why we don't "see" consciousness in non-biological systems, I figured you were talking about the external behaviors we associate with consciousness, not inner experience. After all we have no way of knowing the inner experience of any system but ourselves, we only infer that other beings have similar inner experiences based on similar external behaviors. If you want to just talk about inner experience, again we should first clear up whether you can accept the basic premise of Chalmers' thought experiment, then if you do we can move on to talking about what it implies for inner experience.

Jesse

Craig Weinberg

Jul 14, 2011, 12:08:40 AM7/14/11
to Everything List
>If it is not present physically, then what causes a person to say "I
>am imagining a blue chair"?

A sensorimotive circuit. A sensory feeling which is a desire to
fulfill itself through the motive impulse to communicate that
statement.

>Could you please define this term? I looked it up but the
>definitions I found did not seem to fit.

Nerves are referred to as afferent and efferent also. My idea is that
all nerve functionality is sense (input) and motive (output). I would
say motor, but it's confusing because something like changing your
mind or making a choice is motive but not physically expressed as
motor activity, but I think that they are the same thing. I am
generalizing what nerves do to the level of physics, so that our
nerves are doing the same thing that all matter is doing, just
hypertrophied to host more meta-elaborated sensorimotive phenomena.

>There is such a thing as too low a level. What leads you to believe
>the neuron is the appropriate level to find qualia, rather than the
>states of neuron groups or the whole brain?

I didn't say it was. I was just saying that the more similar you can
get to imitating a human neuron, the closer a brain based on that
imitation will be to having the potential for human consciousness.

>You would have to show that the presence of DNA in part determines the
>evolution of the brain's neural network. If not, it is as relevant to
>you and your mind as the neutrinos passing through you.

Chromosome mutations cause mutations in the brain's neural network, do
they not? btw, I don't interpret neutrinos, photons, or other massless
chargeless phenomena as literal particles. QM is a misinterpretation.
Accurate, but misinterpreted.

>> A digital simulation is just a pattern in an abacus.

>The state of an abacus is just a number, not a process. I think you
>may not have a full understanding of the differences between a Turing
>machine and a string of bits. A Turing machine can mimic any process
>that is definable and does not take an infinite number of steps.
>Turing machines are dynamic, self-directed entities. This
>distinguishes them from cartoons, YouTube videos, and the state of an
>abacus.

A pattern is not necessarily static, especially not an abacus, the
purpose of which is to be able to change the positions to any number.
Just like a cartoon. If you are defining Turing machines as
self-directed entities then you have already defined them as conscious, so
it's a fallacy to present it as a question. Since I think that a
machine cannot have a self, but is instead the self's perception of
the self's opposite, I'm not compelled by any arguments which imagine
that purely quantitative phenomena (if there were such a thing) can be
made to feel.

>Then, if you deny the logical possibility of zombies, or fading
>qualia, you must accept such an emulation of a human mind would be
>equally conscious.

These ideas are not applicable in my model of consciousness and its
relation to neurology.

>The idea behind a computer simulation of a mind is not to make
>something that looks like a brain but to make something that behaves
>and works like a brain.

I think that for it to work exactly like a brain it has to be a brain.
If you want something that behaves like an intelligent automaton, then
you can use a machine made of inorganic matter. If you want something
that feels and behaves like a living organism then you have to create
a living organism out of matter that can self replicate and die.

>Rejection requires the body knowing there is a difference, which is
>against the starting assumption.

If you are already defining something as biologically identical, then
you are effectively asking 'if something non-biological were
biological, would it perform biological functions?'

>I pasted real life counter examples to this. Artificial cochlea and
>retinas.

Those are not replacements for neurons, they are prostheses for a
nervous system. Big difference. I can replace a car engine with
horses, but I can't replace a horse's brain with a car engine.

>At what point does the replacement magically stop working?

At what point does cancer magically stop you from waking up?

>So it can use an artificial retina but not an artificial neuron?

A neuron can use an artificial neuron but a person can't use an
artificial neuron except through a living neuron.

Craig

On Jul 13, 9:16 pm, Jason Resch <jasonre...@gmail.com> wrote:
> On Jul 13, 2011, at 7:04 PM, Craig Weinberg <whatsons...@gmail.com>  
> ...

Jason Resch

Jul 14, 2011, 1:55:42 AM7/14/11
to everyth...@googlegroups.com
On Wed, Jul 13, 2011 at 11:08 PM, Craig Weinberg <whats...@gmail.com> wrote:
>If it is not present physically, then what causes a person to say "I
>am imagining a blue chair"?

A sensorimotive circuit. A sensory feeling which is a desire to
fulfill itself through the motive impulse to communicate that
statement.

But physical effects must come from physical causes unless your theory involves some form of dualism.  The imagined image in the mind has some physical representation; otherwise any communication regarding that imagined image would be coming from nowhere.
 

>Could you please define this term?  I looked it up but the
>definitions  I found did not seem to fit.

Nerves are referred to as afferent and efferent also. My idea is that
all nerve functionality is sense (input) and motive (output). I would
say motor, but it's confusing because something like changing your
mind or making a choice is motive but not physically expressed as
motor activity, but I think that they are the same thing. I am
generalizing what nerves do to the level of physics, so that our
nerves are doing the same thing that all matter is doing, just
hypertrophied to host more meta-elaborated sensorimotive phenomena.

>There is such a thing as too low a level.  What leads you to believe
>the neuron is the appropriate level to find qualia, rather than the
>states of neuron groups or the whole brain?

I didn't say it was. I was just talking about the more similar you can
get to imitating a human neuron, the more similar a brain based on
that imitation will be to having the potential for human
consciousness.

>You would have to show that the presence of DNA in part determines the
>evolution of the brain's neural network.  If not, it is as relevant to
>you and your mind as the neutrinos passing through you.

Chromosome mutations cause mutations in the brain's neural network, do
they not?

Perhaps very rarely it could, but this would be more a malfunction than general behavior.  The question is, what does DNA have to do with the function of an active brain which is thinking or experiencing?  If the neurons behaved the same way without it, why should consciousness be impacted?
 
btw, I don't interpret neutrinos, photons, or other massless
chargeless phenomena as literal particles. QM is a misinterpretation.
Accurate, but misinterpreted.

Whatever you consider them to be, they are physical but not thought to be important to the general operation of the brain.  My original point is there is a lot of noise, and perhaps included in that noise is all the biochemistry itself going on in the background while neurons perform their function.  And therefore, anything which is noise doesn't need to be replicated in an artificial production of a brain.
 

>> A digital simulation is just a pattern in an abacus.

>The state of an abacus is just a number, not a process.  I think you
>may not have a full understanding of the differences between a Turing
>machine and a string of bits.  A Turing machine can mimic any process
>that is definable and does not take an infinite number of steps.
>Turing machines are dynamic, self-directed entities.  This
>distinguishes them from cartoons, YouTube videos, and the state of an
>abacus.

A pattern is not necessarily static, especially not an abacus, the
purpose of which is to be able to change the positions to any number.
Just like a cartoon.

Okay, but with an abacus, or a cartoon, someone else is driving it, and perhaps randomly.  A cartoon does not draw itself, nor does an abacus perform computations on its own.
 
If you are defining Turing machines as
self-directed entities then you have already defined them as conscious, so
it's a fallacy to present it as a question.

Ignore the "self" in self-directed, it was intended to mean they are autonomous, not define that they are conscious.
 
Since I think that a
machine cannot have a self, but is instead the self's perception of
the self's opposite, I'm not compelled by any arguments which imagine
that purely quantitative phenomena (if there were such a thing) can be
made to feel.

"Purely quantitative" suggests that the only values that can be represented by a machine are pure quantities (numbers, values, magnitudes).  Yet a Turing machine can represent an infinite number of relations which are not purely quantitative.  For example, an algorithm might determine whether an input number is prime or not, and based on the result set a bit as a 1 or a 0.  Now if that bit is an input to another function, that bit no longer represents the quantity of 1 or 0, but instead now represents the "qualitative" property of the input number's primality or compositeness.  There may be other qualitative values that with the right processing and interpretation by the right functions could correspond to qualitative properties such as colors.  You can't write off Turing machines as only dealing with numbers.  The possible relations, functions, and processes a Turing machine can implement result in an infinitely varied, deep, complex, and rich landscape of possibilities.


>Then, if you deny the logical possibility of zombies, or fading
>qualia, you must accept such an emulation of a human mind would be
>equally conscious.

These ideas are not applicable in my model of consciousness and its
relation to neurology.

Either zombies are possible within your model or they are not.  Either fading qualia is possible in your model or it is not.  You can't define them as irrelevant in your theory to avoid answering the tough questions. :-)
 

>The idea behind a computer simulation of a mind is not to make
>something that looks like a brain but to make something that behaves
>and works like a brain.

I think that for it to work exactly like a brain it has to be a brain.

 
If you want something that behaves like an intelligent automaton, then
you can use a machine made of inorganic matter.

Okay I agree with this so far.
 
If you want something
that feels and behaves like a living organism

I am confused, are you saying an inorganic machine can only behave like an automaton, or can it behave like a living organism?  Do you believe it is possible for an inorganic machine to exhibit identical external behavior to a living organism in all situations?  (A YouTube video can't respond to questions, and therefore would not count)
 
then you have to create
a living organism out of matter that can self replicate and die.

What does self-replication and death have to do with what a mind feels at any point in time?  Aren't eunuchs conscious?  What about someone who planned to freeze himself so he wouldn't die?
 

>Rejection requires the body knowing there is a difference, which is
>against the starting assumption.

If you are already defining something as biologically identical, then
you are effectively asking 'if something non-biological were
biological, would it perform biological functions?'

It was not identical: the interfaces, all the points that made contact with the outside, were identical, but the insides were completely different.
 

>I pasted real life counter examples to this.  Artificial cochlea and
>retinas.

Those are not replacements for neurons,

Actually the retina prosthesis replaces neurons which perform processing, and thus those neurons are considered an extension of the brain.
 
they are prostheses for a
nervous system. Big difference.

What is different about neurons in the nervous system vs. neurons in the brain?  Why is it we can substitute neurons in the nervous system without problem, but you suggest this fails if we move any deeper into the brain?  To me, the only difference is the complex way in which they are connected.
 
I can replace a car engine with
horses, but I can't replace a horse's brain with a car engine.

>At what point does the replacement magically stop working?

At what point does cancer magically stop you from waking up?


Cancer cells don't serve as functional replacements for healthy cells, where according to the thought experiment, the neural prosthesis would.  The question of when consciousness suddenly disappears, fades, dances, etc., if it does at all, during a neuron replacement is an interesting and illuminating question for any theory of mind, and it is something you should attempt to answer using your theory.
 
>So it can use an artificial retina but not an artificial neuron?

A neuron can use an artificial neuron but a person can't use an
artificial neuron except through a living neuron.


Interesting.  So do you think a person could have every part of their brain substituted with a prosthesis, with the exception of one neuron, and still be conscious?  Why or why not?

Jason

Craig Weinberg

Jul 14, 2011, 8:45:55 AM7/14/11
to Everything List
>You're misunderstanding what I meant by "internal", I wasn't talking about
>subjective interiority (qualia), but *only* about the physical processes in
>the spatial interior of the cell. I am trying to first concentrate on
>external behavioral issues that don't involve qualia at all, to see whether
>your disagreement with Chalmers' argument is because you disagree with the
>basic starting premise that it would be possible to replace neurons by
>artificial substitutes which would not alter the *behavior* of surrounding
>neurons (or of the person as a whole), only after assuming this does
>Chalmers go on to speculate about what would happen to qualia as neurons
>were gradually replaced in this way. Remember this paragraph from my last
>post:

In my model, physical processes are just the exterior, like clothing of the
qualia (perceivable experiences). There is no such thing as external
behavior that doesn't involve qualia, that's my point. It's all one thing -
sensorimotive perception of relativistic electromagnetism. I think that in
the best case scenario, what happens when you virtualize your brain with a
non-biological neuron emulation is that you gradually lose consciousness but
the remaining consciousness has more and more technology at its disposal.
You can't remember your own name but when asked, there would be a
meaningless word that comes to mind for no reason. To me, the only question
is how virtual is virtual. If you emulate the biology, that's a completely
different scenario than running a logical program on a chip. Logic doesn't
ooze serotonin.

>Are you suggesting that even if the molecules given off by foreign cells
>were no different at all from those given off by my own cells, my cells
>would nevertheless somehow be able to nonlocally sense that the DNA in the
>nuclei of these cells was foreign?

It's not about whether other cells would sense the imposter neuron,
it's about how much of an imposter the neuron is. If it acts like a
real cell in every physical way, if another organism can kill it and
eat it and metabolize it completely, then you pretty much have a
cell. Whatever cannot be metabolized in that way is what potentially
detracts from the ability to sustain consciousness. It's not your
cells that need to sense DNA; it's the question of whether a brain
composed entirely, or significantly, of cells lacking DNA would be
conscious in the same way as a person.

>Well, it's not clear to me that you understand the implications of physical
>reductionism based on your rejection of my comments about physical processes
>in one volume only being affected via signals coming across the boundary.
>Unless the issue is that you accept physical reductionism, but reject the
>idea that we can treat all interactions as being local ones (and again I
>would point out that while entanglement may involve a type of nonlocal
>interaction--though this isn't totally clear, many-worlds advocates say they
>can explain entanglement phenomena in a local way--because of decoherence,
>it probably isn't important for understanding how different neurons interact
>with one another).

It's not clear that you are understanding that my model of physics is
not the same as yours. Imagine an ideal glove that is white on the
outside and on the inside feels like latex. As you move your hand in
the glove you feel all sorts of things on the inside: textures,
shapes, etc. From the outside you see different patterns appearing on
it. When you clench your fist, you can see right through the glove to
your hand, but when you do, your hand goes completely numb and you
can't feel the glove. What you are telling me is that if you make a
glove that looks exactly like this crazy glove, if it satisfies all
glove-like properties such that it makes these crazy designs on the
outside, then it must be having the same effect on the inside. My
position is that no, not unless it is close enough to the real glove
physically that it produces the same effects on the inside, which you
cannot know unless you are wearing the glove.

>And is that because you reject the idea that in any volume of space,
>physical processes outside that volume can only be affected by processes in
>its interior via particles (or other local signals) crossing the boundary of
>that volume?

No, it's because the qualia possible in inorganic systems are limited
to inorganic qualia. Think of consciousness as DNA. Can you make DNA
out of string? You could make a really amazing model of it out of
string, but it's not going to do what DNA does. You are saying, well,
what if I make DNA out of something that acts just like DNA? I'm
asking, like what? If it acts like DNA in every way, then it isn't an
emulation; it's just DNA by another name.
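
The local-reductionist premise in the quoted question (processes outside a volume can only be affected by signals crossing its boundary) can be made concrete with a toy model. This is an illustrative sketch using a 1D cellular automaton, not a claim about neurons: region B's evolution depends on region A only through the single boundary cell.

```python
import random

# Toy model of local reductionism: a 1D cellular automaton (rule 110)
# split into regions A (cells 0..9) and B (cells 10..19). B's evolution
# depends on A only through the boundary cell, so any replacement for A
# that reproduces the same boundary signal leaves B unchanged.
RULE = 110

def step_cell(left, center, right):
    # Look up the rule bit for the 3-cell neighborhood (left, center, right).
    return (RULE >> (left * 4 + center * 2 + right)) & 1

def step(cells, left_edge=0, right_edge=0):
    padded = [left_edge] + cells + [right_edge]
    return [step_cell(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(cells) + 1)]

random.seed(0)
full = [random.randint(0, 1) for _ in range(20)]
states = [full]
for _ in range(30):
    states.append(step(states[-1]))

# Re-run region B alone, feeding only the recorded boundary signal
# (the value of A's last cell at each step) as B's left neighbor.
b = full[10:]
for t in range(30):
    b = step(b, left_edge=states[t][9])
assert b == states[-1][10:]  # B cannot tell what A's interior was doing
```

Whatever replaces region A, as long as the boundary cell shows the same sequence of values, region B's history is bit-for-bit identical; that is the sense of "can't tell the difference" in the thought experiment.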

>I don't know what you mean by "functionally equivalent" though, are you
>using that phrase to suggest some sort of similarity in the actual molecules
>and physical structure of what's inside the boundary?

I'm using that phrase because you are. I'm just saying that what the
cell is causes what the cell does. You can try to change what the
cell is while retaining what you think the cell does, but the more
you change it, the higher the odds that you are changing something
you have no way of knowing is important.

>My point is that it's perfectly possible to imagine replacing a neuron with
>something that has a totally different physical structure, like a tiny
>carbon nanotube computer, but that it's sensing incoming neurotransmitter
>molecules (and any other relevant physical inputs from nearby cells) and
>calculating how the original neuron would have behaved in response to those
>inputs if it were still there, and using those calculations to figure out
>what signals the neuron would have been sending out of the boundary, then
>making sure to send the exact same signals itself (again, imagine that it
>has a store of neurotransmitters which can be sent out of an artificial
>synapse into the synaptic gap connected to some other neuron). So it *is*
>"functionally equivalent" if by "function" you just mean what output signals
>it transmits in response to what input signals, but it's not functionally
>equivalent if you're talking about its actual internal structure.

But what the signals and neurotransmitters are coming out of is not
functionally equivalent. The real thing feels and has intent; it
doesn't calculate and imitate. You can't build a machine that feels
and has intent out of basic units that can only calculate and
imitate. The difference just scales up: a sentient being vs. a
spectacular automaton.
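
The distinction Jesse draws above (same input/output mapping, different internals) can be sketched with a toy threshold model. The two classes below are deliberate simplifications invented for illustration, not real neurophysiology:

```python
# Two "neurons" with different internals but identical input/output
# behavior: a downstream observer that sees only outputs cannot
# distinguish them.

class BiologicalNeuron:
    """Integrates inputs as an internal 'membrane potential'."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0  # internal state

    def receive(self, amount):
        self.potential += amount
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # emit a "neurotransmitter" pulse
        return 0

class ProstheticNeuron:
    """Different internals (a raw input log), same outputs."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.log = []  # internal state: complete input history

    def _fire_count(self, inputs):
        total, fired = 0.0, 0
        for a in inputs:
            total += a
            if total >= self.threshold:
                total, fired = 0.0, fired + 1
        return fired

    def receive(self, amount):
        self.log.append(amount)
        # Fire iff this input would have made the original neuron fire.
        return 1 if self._fire_count(self.log) > self._fire_count(self.log[:-1]) else 0

inputs = [0.5, 0.5, 0.25, 0.75, 0.25]
bio, pro = BiologicalNeuron(), ProstheticNeuron()
print([bio.receive(x) for x in inputs])  # → [0, 1, 0, 1, 0]
print([pro.receive(x) for x in inputs])  # → [0, 1, 0, 1, 0]
```

In Jesse's sense the two are "functionally equivalent" (identical output for any input history); in Craig's sense they are not, because their internal constitutions differ. The code only makes the premise precise; it does not settle whose sense matters for qualia.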

>If you do accept that it would be possible in principle to gradually replace
>real neurons with artificial ones in a way that wouldn't change the behavior
>of the remaining real neurons and wouldn't change the behavior of the person
>as a whole, but with the artificial ones having a very different internal
>structure and material composition than the real ones, then we can move on
>to Chalmer's argument about why this sort of behavioral indistinguishability
>suggests qualia probably wouldn't change either. But as I said I don't want
>to discuss that unless we're clear on whether you accept the original
>premise of the thought-experiment.

It all depends how different the artificial neurons are. There might
be other recipes for consciousness and life, but so far we have no
reason to believe that inorganic logic can sustain either. For the
purposes of this thread, let's say no. If it's artificial enough to
be called artificial, then the consciousness associated with it is
also inauthentic.

>That's just a recording of something that actually happened to a biological
>consciousness, not a simulation which can respond to novel external stimuli
>(like new questions I can think to ask it) which weren't presented to any
>biological original.

That's easy. You just make a few hundred YouTube videos and associate
them with some AGI logic; basically make a video ELIZA (which would
actually make a fantastic doctoral thesis, I would think). Now you
can have a conversation with your YouTube person in real time. You
could even splice together phonemes so they can speak English in
general, and then hook them up to Google translation. Would you then
say that, if the AGI algorithms were good enough (functionally
equivalent to human intelligence in every way), the YouTube person
was conscious?
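
For reference, the keyword-and-template mechanism behind an ELIZA-style responder (the kind of thing Craig's "video ELIZA" would splice onto video clips) fits in a few lines. The rules below are hypothetical placeholders, not Weizenbaum's originals:

```python
import re

# A minimal ELIZA-style responder: keyword patterns mapped to canned
# response templates, with a fallback when nothing matches. It imitates
# conversation without understanding anything, which is Craig's point.
RULES = [
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bI think (.*)", "What makes you think {0}?"),
    (r"\byou\b", "We were discussing you, not me."),
]

def respond(text):
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(respond("I think the glove analogy holds"))
# → What makes you think the glove analogy holds?
```

The design point: every response is a surface transformation of the input, with no internal state standing for meaning, which is exactly why ELIZA is the canonical example of behavior without understanding.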

>But when you originally asked why we don't "see" consciousness in
>non-biological systems, I figured you were talking about the external
>behaviors we associate with consciousness, not inner experience. After all
>we have no way of knowing the inner experience of any system but ourselves,
>we only infer that other beings have similar inner experiences based on
>similar external behaviors.

That's what I'm trying to tell you. Consciousness is nothing but
inner experience. It has no external behaviors; we can just recognize
our own feelings in other things when we see them do something that
reminds us of ourselves.

> If you want to just talk about inner experience, again we should first
>clear up whether you can accept the basic premise of Chalmers' thought
>experiment, then if you do we can move on to talking about what it implies
>for inner experience.

I don't want to talk about inner experience unless you want to. I
want to talk about a fundamental reordering of the cosmos, which, if
it were correct, would be staggeringly important, and which I have
not seen anywhere else:

1. Mind and body are not merely separate, but perpendicular
topologies of the same ontological continuum of sense.
2. The interior of electromagnetism is sensorimotive, the interior of
determinism is free will, and the interior of general relativity is
perception.
3. Quantum Mechanics is a misinterpretation of atomic quorum sensing.
4. Time, space, and gravity are void. Their effects are explained by
perceptual relativity and sensorimotor electromagnetism.
5. The "speed of light" *c* is not a speed; it's a condition of
nonlocality or absolute velocity, representing a third state of
physical relation as the opposite of both stillness and motion.

It's not about meticulous logical deduction, it's about grasping the
largest, broadest description of the cosmos possible which doesn't
leave anything out. I just want to see if this map flies, and if not,
why not?



Bruno Marchal

unread,
Jul 15, 2011, 4:39:48 AM7/15/11
to everyth...@googlegroups.com

On 14 Jul 2011, at 14:39, Craig Weinberg wrote:

I don't want to talk about inner experience. I want to talk about my fundamental reordering of the cosmos, which if it were correct, would be staggeringly important and I have not seen anywhere else:

  1. Mind and body are not merely separate, but perpendicular topologies of the same ontological continuum of sense.
Could you define "perpendicular topologies"? You say you don't study math, so why use mathematical terms? This seems nonsensical to a mathematician, unless you mean a set of topologies with some scalar product, but then you should give it.



  2. The interior of electromagnetism is sensorimotive, the interior of determinism is free will, and the interior of general relativity is perception.
What do you mean by the interior of electromagnetism?


  3. Quantum Mechanics is a misinterpretation of atomic quorum sensing.
This seems like nonsense.



  4. Time, space, and gravity are void. Their effects are explained by perceptual relativity and sensorimotor electromagnetism.
?


  5. The "speed of light" c is not a speed; it's a condition of nonlocality or absolute velocity, representing a third state of physical relation as the opposite of both stillness and motion.
?



It's not about meticulous logical deduction, it's about grasping the largest, broadest description of the cosmos possible which doesn't leave anything out. I just want to see if this map flies, and if not, why not?


Anyway, you seem to presuppose some physicalness, and so by the UDA reasoning, you need a physics and a cognitive science with (very special) infinities. This seems to make the mind-body problem (MB), and its formulation, artificially more complex, without motivation. Without an attempt to make things clearer I can hardly add anything. Perhaps understanding the MB problem in the comp context might help you to formulate it in some non-comp context.
 
Bruno



m.a.

unread,
Jul 15, 2011, 9:46:51 AM7/15/11
to everyth...@googlegroups.com
You should get work helping Rachel collect material. You'd be a natural.    m
 
 

Craig Weinberg

unread,
Jul 15, 2011, 11:19:03 AM7/15/11
to Everything List
>Could you define "perpendicular topologies"? You say you don't study
>math, so why use mathematical terms? This seems nonsensical to a
>mathematician, unless you mean a set of topologies with some scalar
>product, but then you should give it.

Yeah, I'm not sure if I mean it literally or figuratively. Maybe
better to say a pseudo-dualistic, involuted topological continuum?
Stephen was filling me in on some of the terminology. I'm looking at a
continuum of processes which range from discrete, [dense, public,
exterior, generic, a-signifying, literal...at the extreme would be
local existential stasis, fixed values, occidentialism (Only Material
Matter Matters)] to the compact [diffuse, private, interior,
proprietary, signifying, figurative...at the extreme would be non
local essential exstasis, orientalism (Anything Can Mean Everything)].
They are perpendicular because it's not as if there is a one to one
correspondence between each neuron and a single feeling, feelings are
chords of entangled sensorimotive events which extend well beyond the
nervous system.

Since the duality is polarized in every possible way, I want to make
it clear that to us, they appear perfectly opposite in their nature,
so I say perpendicular. Topology because it's a continuum with an XY
axis (Y being quantitative magnitude of literal scale on the
occidental side; size/scale, density, distance, and qualitative
magnitude on the oriental side; greatness/significance, intensity,
self-referentiality...these aren't an exhaustive list, I'm just
throwing out adjectives.). I'm not averse to studying the concepts of
mathematics, I'm just limited in how I can make sense of them and how
much I want to use them. I'm after more of an F=ma nugget of
simplicity than a fully explicated field equation. I want the most
elementary possible conception of what the cosmos seems to be.

>What do you mean by the interior of electromagnetism?

The subjective correlate of all phenomena which we consider
electromagnetic. It could be more of an ontological interiority:
throughput. I'm saying that energy is a flow of experiences contained
by the void of energy - and energy, all energy is change or difference
in what is sensed or intended. Negentropy. If there is no change in
what something experiences, there is no time. So it makes sense that
what we observe in the brain as being alterable with electromagnetism
translates as changes in sensorimotor experience.

>> Quantum Mechanics is a misinterpretation of atomic quorum sensing.
>This seems like non sense.

Didn't mean to be inflammatory there. What I mean to say is that the
popular layman's understanding of QM as how the microcosm works - the
Standard Model of literal particles in a vacuum with strange
behaviors, is inside out. What we are actually detecting is
particulate moods of sensorimotive events shared by our measuring
equipment (including ourselves) and the thing that we think is being
measured.

>>> Time, space, and gravity are void. Their effects are explained by
>> perceptual relativity and sensorimotor electromagnetism.

>?

Time is just the dialectic of change and the cumulative density of
its own change residue carried forward. Space is just the
singularity's way of dividing itself existentially. If you have a
universe of one object, there is no space. Space is only the relation
of objects to each other. No relation, no space. Perceptual relativity
is meta-coherence, how multiple levels and scales of sensorimotor
electromagnetic patterns are recapitulated (again cumulative
entanglement...retention of pattern through iconicized
representation).

>> The "speed of light" c is not a speed it's a condition of
>> nonlocality or absolute velocity, representing a third state of
>> physical relation as the opposite of both stillness and motion.

>?
Stillness is a state which appears unchanging from the outside, and
from the inside the universe is changing infinitely fast. Motion is
the state of change relative to other phenomena, the faster you move
the more time slows down for you relative to other index phenomena. c
is the state of absolute change, being both change and non-change
itself, so that it appears non-local from the outside, ubiquitous and
absent, and from the inside the cosmos is still.
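
For reference, the standard special-relativity version of "the faster you move the more time slows down" is the Lorentz factor, gamma = 1 / sqrt(1 - v^2/c^2). A quick numerical sketch of that textbook formula, offered only as a reference point for the claim being reinterpreted here:

```python
import math

# In standard special relativity, a clock moving at speed v runs slow
# by the Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2). Units with c = 1.
def lorentz_gamma(v, c=1.0):
    if not 0 <= v < c:
        raise ValueError("need 0 <= v < c")
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

print(lorentz_gamma(0.8))  # at 80% of c, time dilates by a factor of about 1.667
```

As v approaches c the factor diverges, which is the conventional reading of c as a limit rather than the "third state of physical relation" proposed above.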

Any better?

Craig

Bruno Marchal

unread,
Jul 17, 2011, 12:57:17 PM7/17/11
to everyth...@googlegroups.com

No, it is worse, I'm afraid. I hope you don't mind my being frank. In
fundamental matters, you have to explain things from scratch. Nothing
can be taken for granted, and you have to put your assumptions on the
table, so that we avoid oblique comments and vocabulary dispersion.
You say yourself that you don't know if you are talking literally or
figuratively. That says it all, I think. You should make a choice and
work from there. Personally, I am a literalist; that is, I apply the
scientific method. For the mind-body problem, the hard part for
scientists consists in understanding that once we assume the comp
hyp, we can translate "philosophical problems" into "mathematical
and/or physical problems".
Philosophers don't like that (especially continental ones), but this
fits with their usual tradition of defending academic territories and
positions (food). It is natural: as in (pseudo-)religion, they are
not very happy when people use the scientific method to invade their
fields of study.
But this means that, in interdisciplinary research, you must be able
to be understood by a majority in each field you are crossing. Even
when you are successful at this, you still have to find people with
the courage to study the connections between the domains.
A lot of scientists still believe that notions like mind and
consciousness are crackpot notions, and when sincere people try to
discuss those notions, you can be amazed by the tons of difficulties.
I have nothing against attempts toward a materialist solution of the
MB problem, and in that case at least we know (or should know, or
refute...) that we have to abandon even extremely weak versions of
mechanism. But then this looks like introducing special (and unknown)
infinities into the MB puzzle, so I am not interested without some
key motivation being provided.

In this list people are open-minded toward the "everything exists"
type of theories, like Everett's many-worlds, with an open mind on
computationalism (Schmidhuber) and mathematicalism or immaterialism
(Tegmark). So my own contribution was well suited, given that I
propose an argument showing that if we believe that we can survive
with a digitalizable body, then we dispose, ONLY, of a yet very
solid, constructive, and highly structured version of an
"everything": all computations (in the precise arithmetical sense of
sigma_1 arithmetical relations) and their (coded) proofs. I show also
that we dispose of a very natural notion of observers, the universal
machines, and that among them we can already "interview" those which
can prove, know, guess, and feel about their internal views on
realities.

Everett's move to embed the physicist subject *in* the object matter
of the physical equation (SWE) extends itself in the arithmetical
realm, with the embedding of the mathematician *in* arithmetic, once
we take the possibility of our local digitalization seriously enough
into consideration.

This shows mainly that, with comp, the mind-body problem is two times
more complex than people usually think. Not only do we have to
explain qualia/consciousness from the numbers, but we have to explain
quanta/matter from the numbers too.

But universal machines have a natural theory of thought (the laws of
Boole) and a natural theory of mind (the Gödel-Löb-Solovay logics of
self-reference), and by the very existence of computer science, in
fine, you get a translation of the body problem into computer
science, which makes it automatically a problem in number theory.

Bruno

> --
> You received this message because you are subscribed to the Google
> Groups "Everything List" group.
> To post to this group, send email to everyth...@googlegroups.com.
> To unsubscribe from this group, send email to everything-li...@googlegroups.com
> .
> For more options, visit this group at http://groups.google.com/group/everything-list?hl=en
> .
>

http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

unread,
Jul 17, 2011, 6:54:40 PM7/17/11
to Everything List
>No, it is worse, I'm afraid. I hope you don't mind my being frank. In
>fundamental matters, you have to explain things from scratch.
>Nothing can be taken for granted, and you have to put your assumptions
>on the table, so that we avoid oblique comments and vocabulary
>dispersion.

No, I don't mind frankness at all. I'm trying not to assume anything
if I can help it. I'm just correlating all common phenomena in the
cosmos in a simple form which focuses on their symmetry, and I think
accurately explains the relation of consciousness (or meta-perception,
which is meta-sensorimotive experience) to electromagnetic patterns in
the brain, and by extension, to explain Relativity as the
perceptibility of matter in general.

>You say yourself that you don't know if you talk literally or
>figuratively. That's says it all, I think. You should make a choice,
>and work from there.

It's not my intention to make a good theory; it's my intention to
describe the cosmos as it actually is. The cosmos is both literal and
figurative, and I believe its quality of literalness and
figurativeness is part of the same continuum of
objectivity-subjectivity, discrete-compact, nihilistic
existence-solipsistic essence, etc. I don't know if it's useful to
postulate a literal topology when half of the continuum is figurative
and experiential. It seems like it would lead to a misunderstanding,
but at the same time, I believe that it is perpendicular
ontologically, just not in the sense that the two topologies could be
modeled in space as perpendicular regions. One of the topologies is
perpendicular to the idea of space itself.

>This shows mainly that, with comp, the mind-body problem is two times
>more complex than what people usually think. Not only we have to
>explain qualia/consciousness from the number, but we have to explain
>quanta/matter from the numbers too.

I think the mind-body problem is resolved in my topology. It's simple.
Qualia and quanta are both elemental intersecting topologies which
meet on one end as maximally dimorphic (ie our ordinary, mundane
perception of subjective self vs external objects) and on the other
end as profoundly indistinguishable (quantum mechanics, shamanism
produce logical dualisms, monastic detachment). Qualia scales up as
perception, quanta scales up as relativity. They are the same meta
organizing principle: sensorimotive electromagnetism squaring itself.

Craig

Stathis Papaioannou

Jul 19, 2011, 7:26:41 AM
to everyth...@googlegroups.com
On Thu, Jul 14, 2011 at 10:45 PM, Craig Weinberg <whats...@gmail.com> wrote:

> It's not about whether other cells would sense the imposter neuron,
> it's about how much of an imposter the neuron is. If it acts like a
> real cell in every physical way, if another organism can kill it and
> eat it and metabolize it completely, then you pretty much have a
> cell. Whatever cannot be metabolized in that way is what potentially
> detracts from the ability to sustain consciousness. It's not your
> cells that need to sense DNA, it's the question of whether a brain
> composed entirely of, or significantly of, cells lacking DNA would
> be conscious in the same way as a person.

DNA doesn't play a direct role in neuronal to neuronal interaction. It
is necessary for the synthesis of proteins, so without it the neuron
would be unable to, for example, produce more surface receptors or the
essential proteins needed for cell survival; however, if the DNA were
destroyed the neuron would carry on functioning as per usual for at
least a few minutes. Now, you speculate that consciousness may somehow
reside in the components of the neuron and not just in its function,
so that perhaps if the DNA were destroyed the consciousness would be
affected - let's say for the sake of simplicity that it too would be
destroyed - even in the period the neuron was functioning normally. If
that is so, then if all the neurons in your visual cortex were
stripped of their DNA you would be blind: your visual qualia would
disappear. But if all the neurons in your visual cortex continued to
function normally, they would send the normal signals to the rest of
your brain and the rest of your brain would behave as if you could
see: that is, you would accurately describe objects put in front of
your eyes and honestly believe that you had normal vision. So how
would this state, behaving as if you had normal vision and believing
you had normal vision, differ from actually having normal vision; or
to put it differently, how do you know that you aren't blind and
merely deluded about being able to see?


--
Stathis Papaioannou

Craig Weinberg

Jul 19, 2011, 6:13:35 PM
to Everything List
I think there could be differences in how vision is perceived if all
of the visual cortex lacked DNA, even if the neurons of the cortex
exhibited superficial evidence of normal connectivity. A person could
be dissociated from the images they see, feeling them to be
meaningless or unreal, seen as if in third person or from malicious
phantom/alien eyeballs. Maybe it would be more subtle...a sensation of
otherhanded sight, or sight seeming to originate from a place behind
the ears rather than above the nose. The non-DNA vision could be
completely inaccessible to the conscious mind, a psychosomatic/
hysterical blindness, or perhaps the qualia would be different,
unburdened by DNA, colors could seem lighter, more saturated like a
dream. The possibilities are endless. The only way to find out is to
do experiments.

DNA may not play a direct role in neuronal to neuronal interaction,
but the same could be said of perception itself. We have nothing to
show that perception is the necessary result of neuronal interaction.
The same interactions could exist in a simulation without any kind of
perceived universe being created somewhere. Just because the behavior
of neurons correlates with perception doesn't mean that their behavior
alone causes perception. Materials matter. A TV set made out of
hamburger won't work.

What I'm trying to say is that the sensorimotive experience of matter
is not limited to the physical interior of each component of a cell or
molecule, but rather it is a completely other, synergistic topology
which is as diffuse and experiential as the component side is discrete
and observable. There is a functional correlation, but that's just
where the two topologies intersect. Many minor physical changes to the
brain can occur without any noticeable differences in perception -
sometimes major changes, injuries, etc. Major changes in the psyche
can occur without any physical precipitate - reading a book may
unleash a flood of neurotransmitters but the cause is semantic, not
biochemical.

What we don't know is what levels of our human experience are
essential and which ones may be vestigial or redundant. We don't know
what the qualitative content of the individual neuron signals are,
whether they contribute to a high level feeling upstream or whether
that contribution requires a low level experience to be amplified. If
a cell has no DNA, maybe it feels distress and that feeling is
amplified in the aggregate signals.

On Jul 19, 7:26 am, Stathis Papaioannou <stath...@gmail.com> wrote:

Jason Resch

Jul 19, 2011, 8:59:22 PM
to everyth...@googlegroups.com
On Tue, Jul 19, 2011 at 5:13 PM, Craig Weinberg <whats...@gmail.com> wrote:
I think there could be differences in how vision is perceived if all
of the visual cortex lacked DNA, even if the neurons of the cortex
exhibited superficial evidence of normal connectivity. A person could
be dissociated from the images they see, feeling them to be
meaningless or unreal, seen as if in third person or from malicious
phantom/alien eyeballs. Maybe it would be more subtle...a sensation of
otherhanded sight, or sight seeming to originate from a place behind
the ears rather than above the nose. The non-DNA vision could be
completely inaccessible to the conscious mind, a psychosomatic/
hysterical blindness, or perhaps the qualia would be different,
unburdened by DNA, colors could seem lighter, more saturated like a
dream. The possibilities are endless. The only way to find out is to
do experiments.

So would the person dissociated from these images, or feeling them to be meaningless or unreal, etc., ever report these different feelings?  Remember, nerves control movement of the vocal cords; if the neural network were unaffected and its operation remained the same, all outwardly visible behavior would also be the same.  The person could not report any differences with their sense of vision, nor would other parts of their brain (such as those involved in thought, introspection, etc.) have any indication that the nerves in the visual cortex had been modified (so long as they continued to send the right signals at the right times).

 

DNA may not play a direct role in neuronal to neuronal interaction,
but the same could be said of perception itself. We have nothing to
show that perception is the necessary result of neuronal interaction.

All inputs to the brain are the result of neuronal interaction, as are all outputs.  Neurons are affected by other neurons.

Now if I present an apple to a person, and I ask "What is this?" and the person reports "An apple." that is an example of perception. 

In theory, one could trace the nerve signals from the optic and auditory nerves all the way to the nerves controlling the vocal cords.  For perception to not be the result of neuronal interaction, you would need to find some point between the auditory and visual inputs and the verbal outputs where something besides other nerves is controlling or affecting the behavior of nerves.

Do you have any proposal for what this thing might be?
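Jason's tracing argument can be sketched as a toy chain of functions (all names here are invented purely for illustration, not anyone's actual model of the brain): if each stage's output is fully determined by the previous stage, the verbal report is fixed by the sensory input, and any non-neural influence would have to show up as a deviation at some link in the chain.

```python
# Toy sketch (names invented for illustration): a perception pathway
# modeled as a chain of functions, each stage depending only on the
# output of the stage before it.

def retina(stimulus):
    """Encode a visual stimulus as a crude signal vector."""
    return {"apple": [1, 0, 1]}.get(stimulus, [0, 0, 0])

def visual_cortex(signal):
    """Classify the incoming signal."""
    return "apple-shaped" if signal == [1, 0, 1] else "unknown"

def speech_area(percept):
    """Drive the vocal cords based on the classified percept."""
    return "An apple." if percept == "apple-shaped" else "I don't know."

def report(stimulus):
    # Trace the signal from sensory input to verbal output.
    return speech_area(visual_cortex(retina(stimulus)))

print(report("apple"))  # -> An apple.
```

The point of the sketch is only that the output is a deterministic function of the input via the intermediate stages; anything else influencing the report would have to intervene at one of the links.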
 
The same interactions could exist in a simulation without any kind of
perceived universe being created somewhere. Just because the behavior
of neurons correlates with perception doesn't mean that their behavior
alone causes perception. Materials matter. A TV set made out of
hamburger won't work.

Humans can make TV sets using cathode ray tubes, liquid crystal displays, projection screens, plasma display panels, and so on.  Obviously material does not matter for making a TV set; what is important is the functions and behaviors of the components.  So long as the components allow emission of light at certain frequencies at specific locations on a grid, they can be used to construct a television set.
 

What I'm trying to say is that the sensorimotive experience of matter
is not limited to the physical interior of each component of a cell or
molecule, but rather it is a completely other, synergistic topology
which is as diffuse and experiential as the component side is discrete
and observable. There is a functional correlation, but that's just
where the two topologies intersect. Many minor physical changes to the
brain can occur without any noticeable differences in perception -
sometimes major changes, injuries, etc. Major changes in the psyche
can occur without any physical precipitate - reading a book may
unleash a flood of neurotransmitters but the cause is semantic, not
biochemical.

The idea that two functionally equivalent minds made out of different material could determine a difference is contrary to the near universally accepted Church-Turing thesis.  A result of the thesis is that it is not possible for a process to determine its ultimate implementation.  This is what allows one to play old Atari or Nintendo games on modern PCs, despite the completely different hardware and architecture.  From the perspective of the old Nintendo game, it is running on a Nintendo console; it has no way to determine it is running on a Dell laptop running Windows.  Similarly, if the mind is a process, it has, in principle, no way of knowing whether it is implemented by a wet brain or a cluster of super computers.
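The substrate-independence point can be sketched with two implementations of the same function (a minimal illustration, not drawn from the thread): a caller that only sees inputs and outputs has no way to tell which implementation it is talking to.

```python
# Illustrative sketch: the same function realized on two different
# "substrates" (recursion vs. iteration). From the outside, only the
# input/output behavior is visible, so the two are indistinguishable.

def factorial_recursive(n):
    """Compute n! using the call stack as the working substrate."""
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """Compute n! using a loop and an accumulator instead."""
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# A caller observing only outputs cannot distinguish the implementations:
for n in range(10):
    assert factorial_recursive(n) == factorial_iterative(n)
```

This is the same relationship an emulated game has to its host hardware: identical observable behavior, entirely different machinery underneath.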


Jason

Craig Weinberg

Jul 20, 2011, 8:08:07 AM
to Everything List
>So would the person dissociated from these images, or feeling them
>meaningless or unreal, etc., ever report these different feelings?
>Remember, nerves control movement of the vocal cords, if the neural network
>was unaffected and its operation remained the same all outwardly visible
>behavior would also be the same. The person could not report any
>differences with their sense of vision, nor would other parts of their brain
>(such as those of thought, or introspection, etc.) have any indication that
>the nerves in the visual cortex has been modified (so long as they continued
>to send the right signals at the right times).

I'm saying that without DNA in the neurons, or something which
functions exactly as DNA, it may not be possible to satisfy the given
that the neural network is unaffected. It's all a matter of what the
substitution level is. If you replaced water with heavy water, it's
not exactly the same thing. If you have something that acts like water
in all ways, it's nothing but water. If you have a brain made of
neurons that are not neurons, you have something other than a brain to
one degree or another, depending on the exact difference. If you are
stating as a given that there is no difference between the replacement
brain and a biological brain, then the replacement brain is nothing
but a biological brain.

>All inputs to the brain are the result of neuronal interaction, as are all
>outputs. Neurons are affected by other neurons.
>

I think that 'the brain' is neuronal interaction (and intracellular
interaction, molecular interaction). Its inputs and outputs are with
the outside world of physical sense and the inside world of semantic
sense. The brain is the abacus, storing, changing, and organizing
patterns, but the experience is felt through the brain, not as a
consequence of the brain's functionality. The functionality of course
determines which patterns can be accessed from the exterior by the
interior and vice versa, but it is the interior sense of the brain as
a whole which is the user(s) of the computer.

>Now if I present an apple to a person, and I ask "What is this?" and the
>person reports "An apple." that is an example of perception.
>
>In theory, one could trace the nerve signals from the optic and auditory
>nerves all the way to the nerves controlling the vocal cords. For
>perception to not be the result of neuronal interaction, you would need to
>find some point between the auditory and visual inputs and the verbal
>outputs where something besides other nerves are controlling or affecting
>the behavior of nerves.

The perception is the result of the apple first. Of the properties of
the universe which allow sense to be propagated from apple to optic
nerve to visual cortex. From the outside looking in, perception is
incredibly complex. From the inside looking out, it's very simple.
Pain is simple. We are complex so our pain is mechanically achieved in
a relatively complex way, but any living organism probably has some
version of a pain-like experience. It's as elemental as ATP or DNA. We
can't observe it from the outside of course, because the interior
universe is innumerable private reality tunnels; the polar opposite of
the public unified topology of the exterior.

>Humans can make TV sets using cathode ray tubes, liquid crystal displays,
>projection screens, plasma display panels, and so on. Obviously material
>does not matter for making a TV set, what is important is the functions and
>behaviors of the components. So long as the components allow emission of
>light at certain frequencies at specific locations on a grid it could be
>used to construct a television set.

Of course material matters. There is a narrow range of materials which
we can feasibly make a TV set out of. We can't make a TV set out of
hamburger because hamburger cannot be made into components that do the
same thing as semiconductors. You're also conflating a TV set with
any two-dimensional display, which is not what we're talking about. We
very well could genetically engineer a brain, or biologically engineer
a brain, but I'm saying that we cannot semiotically engineer a brain
out of inorganic matter and expect it to be able to feel what
organisms feel. It's just going to be a sculpture of a brain that
behaves like a brain from the outside, but it can only play DVDs for
us. It has no user.

Craig

On Jul 19, 8:59 pm, Jason Resch <jasonre...@gmail.com> wrote:

Stathis Papaioannou

Jul 20, 2011, 9:09:41 AM
to everyth...@googlegroups.com
On Wed, Jul 20, 2011 at 10:08 PM, Craig Weinberg <whats...@gmail.com> wrote:
>>So would the person dissociated from these images, or feeling them
>>meaningless or unreal, etc., ever report these different feelings?
>>Remember, nerves control movement of the vocal cords, if the neural network
>>was unaffected and its operation remained the same all outwardly visible
>>behavior would also be the same.  The person could not report any
>>differences with their sense of vision, nor would other parts of their brain
>>(such as those of thought, or introspection, etc.) have any indication that
>>the nerves in the visual cortex has been modified (so long as they continued
>>to send the right signals at the right times).
>
> I'm saying that without DNA in the neurons, or something which
> functions exactly as DNA, it may not be possible to satisfy the given
> that the neural network is unaffected. It's all a matter of what the
> substitution level is. If you replaced water with heavy water, it's
> not exactly the same thing. If you have something that acts like water
> in all ways, it's nothing but water. If you have a brain made of
> neurons that are not neurons, you have something other than a brain to
> one degree or another, depending on the exact difference. If you are
> stating as a given that there is no difference between the replacement
> brain from a biological brain, then the replacement brain is nothing
> but a biological brain.

The requirement is that the artificial neurons interact with the
biological neurons in the normal way, so that the biological neurons
can't tell that they are imposters. This is a less stringent
requirement than making artificial neurons that are indistinguishable
from biological neurons under any test whatsoever. In the example I
gave before, a neuron stripped of its DNA would continue behaving
normally for at least a few minutes, so the surrounding neurons would
not detect that anything had changed, whereas an electron micrograph
might easily show the difference.
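Stathis's requirement can be sketched as a toy interface (the class and method names are hypothetical, invented only for illustration): a neighboring neuron responds solely to emitted signals, so two implementations with different internals but identical outputs are indistinguishable to it, even though an external inspection of the internals could tell them apart.

```python
# Hypothetical toy model (names invented for illustration): two neuron
# implementations that differ internally but present identical outputs
# to downstream neurons, which see only the emitted signal.

class BiologicalNeuron:
    def __init__(self):
        self.dna = "ATCG"  # internal detail, invisible to neighbors

    def fire(self, input_level):
        # Output that other neurons actually respond to.
        return 1.0 if input_level > 0.5 else 0.0

class ArtificialNeuron:
    def __init__(self):
        self.firmware = "v1.0"  # entirely different internals

    def fire(self, input_level):
        # Same input/output behavior as the biological version.
        return 1.0 if input_level > 0.5 else 0.0

def downstream_response(neuron, stimulus):
    # A neighboring neuron reacts only to what the other neuron emits.
    return neuron.fire(stimulus) * 2.0

# The neighbor cannot distinguish the two implementations:
for s in (0.1, 0.4, 0.6, 0.9):
    assert downstream_response(BiologicalNeuron(), s) == \
           downstream_response(ArtificialNeuron(), s)
```

This captures the less stringent requirement: equivalence at the signal interface, not indistinguishability under any test (an "electron micrograph" here would be inspecting `dna` vs. `firmware` directly).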


-- Stathis Papaioannou

Craig Weinberg

Jul 20, 2011, 2:40:50 PM
to Everything List
Chickens can walk around for a while without a head also. It doesn't
mean that air is a viable substitute for a head, and it doesn't mean
that the head isn't producing a different quality of awareness than it
does under typical non-mortally wounded conditions.

On Jul 20, 9:09 am, Stathis Papaioannou <stath...@gmail.com> wrote:

meekerdb

Jul 20, 2011, 3:07:41 PM
to everyth...@googlegroups.com
On 7/20/2011 11:40 AM, Craig Weinberg wrote:
> Chickens can walk around for a while without a head also. It doesn't
> mean that air is a viable substitute for a head, and it doesn't mean
> that the head isn't producing a different quality of awareness than it
> does under typical non-mortally wounded conditions.
>
>

No, but it means the chicken head isn't necessary to walking - just like
DNA isn't necessary to consciousness.

Brent

Craig Weinberg

Jul 20, 2011, 5:59:49 PM
to Everything List
What does consciousness require?

meekerdb

Jul 20, 2011, 6:14:53 PM
to everyth...@googlegroups.com
On 7/20/2011 2:59 PM, Craig Weinberg wrote:
> What does consciousness require?
>

Interaction with the world. Information processing. Memory. A point
of view; i.e. model of the world including self. Purpose/values.

Brent

Stathis Papaioannou

Jul 20, 2011, 6:58:49 PM
to everyth...@googlegroups.com
On Thu, Jul 21, 2011 at 4:40 AM, Craig Weinberg <whats...@gmail.com> wrote:
> Chickens can walk around for a while without a head also. It doesn't
> mean that air is a viable substitute for a head, and it doesn't mean
> that the head isn't producing a different quality of awareness than it
> does under typical non-mortally wounded conditions.

I think you have failed to address the point made by several people so
far, which is that if the replacement neurons can interact with the
remaining biological neurons in a normal way, then it is not possible
for there to be a change in consciousness. The important thing is
**behaviour of the replacement neurons from the point of view of the
biological neurons**.


--
Stathis Papaioannou

Craig Weinberg

Jul 20, 2011, 7:33:48 PM
to Everything List
Sounds like a fancy cash register to me.

Craig Weinberg

Jul 20, 2011, 7:44:15 PM
to Everything List
Since it's not possible to know what the point of view of biological
neurons would be, we can't rule out the contents of the cell. You
can't presume to know that behavior is independent of context. If you
consider the opposite scenario, at what point do you consider a
microelectronic configuration conscious? How many biological neurons
does it take, added to a computer, before it has its own agenda?

On Jul 20, 6:58 pm, Stathis Papaioannou <stath...@gmail.com> wrote:

Craig Weinberg

Jul 20, 2011, 7:51:17 PM
to Everything List
Or, imagine you were to replace a city with empty cars that drive the
streets following sophisticated models of urban traffic. Is a group of
empty buildings that produce empty cars which drive around the streets
convincingly a city?

On Jul 20, 6:58 pm, Stathis Papaioannou <stath...@gmail.com> wrote:

meekerdb

Jul 20, 2011, 9:02:47 PM