RE: bruno list


Jesse Mazer

Jul 13, 2011, 1:23:30 PM
to everyth...@googlegroups.com

Craig Weinberg wrote:

>It's weird, I get an error when I try to reply in any way to your last post. Here's what I'm trying to Reply:

>The crux of the whole issue is what we mean by functionally indistinguishable.

But I specified what I meant (and what I presume Chalmers meant)--that any physical influences such as neurotransmitters that other neurons respond to (in terms of the timing of their own electrochemical pulses, and the growth and death of their synapses) are still emitted by the substitute, so that the other neurons "can't tell the difference" and their behavior is unchanged from what it would be if the neuron hadn't been replaced by an artificial substitute.

>If you aren't talking about silicon chips or digital simulation, then you are talking about a different level of function. Would your artificial neuron synthesize neurotransmitters, detect and respond to neurotransmitters, even emulate genetics?

I said that it would emit neurotransmitters--whether it synthesized them internally or had a supply that was periodically replenished by nanobots or something is irrelevant. Again, all that matters is that the *outputs* that influence other neurons are just like those of a real neuron; any *internal* processes in the substitute are just supposed to be artificial simulations of what goes on in a real neuron, so there might be simulated genes (in a simulation running on something like a silicon chip or other future computing technology) but there'd be no need for actual DNA molecules inside the substitute.

>If you get down to the level of the pseudobiological, then the odds of being able to replace neurons successfully gets much higher to me. To me, that's not what functionalism is about though. I think of functionalism as confidence in a more superficial neural network simulation of logical nodes. Virtual consciousness.

I don't think functionalism means confidence that the extremely simplified "nodes" of most modern neural networks would be sufficient for a simulated brain that behaved just like a real one; it might well be that much more detailed simulations of individual neurons would be needed for mind uploading. The idea is just that *some* sufficiently detailed digital simulation would behave just like real neurons and a real brain, and "functionalism" as a philosophical view says that this simulation would have the same mental properties (such as qualia, if the functionalist thinks of "qualia" as something more than just a name for a certain type of physical process) as the original brain (see the first sentence defining 'functionalism' at http://plato.stanford.edu/entries/functionalism/ ).
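
To make that distinction concrete, here is a toy Python sketch (purely illustrative; the model, names, and parameter values are invented for this example) contrasting a bare logical "node" with a slightly more detailed leaky integrate-and-fire neuron that has internal state and dynamics:

    def logical_node(inputs, weights, threshold=1.0):
        """The 'superficial' unit: a weighted sum and a binary output, no dynamics."""
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    class LeakyIntegrateFireNeuron:
        """A slightly more detailed model: membrane potential persists and leaks over time."""
        def __init__(self, leak=0.9, threshold=1.0):
            self.v = 0.0              # membrane potential (internal state)
            self.leak = leak          # decay factor applied each time step
            self.threshold = threshold

        def step(self, input_current):
            self.v = self.v * self.leak + input_current
            if self.v >= self.threshold:
                self.v = 0.0          # reset after firing
                return 1              # emit a spike
            return 0

    neuron = LeakyIntegrateFireNeuron()
    print([neuron.step(0.4) for _ in range(9)])   # [0, 0, 1, 0, 0, 1, 0, 0, 1]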

>If you're going to get down to the biological substitution level of emulating the tissue itself so that the tissue is biologically indistinguishable from brain tissue, but maybe has some plastic or whatever instead of cytoplasm, then sure, that might work. As long as you've got real DNA, real ions, real sensitivity to real neurotransmitters, then yeah that could work.

No, that's not what I'm talking about. Everything internal to the boundary of the neuron is simulated, possibly using materials that have no resemblance to biological ones. But all the relevant molecules and electromagnetic waves which leave the boundary (and which are relevant to the behavior of other neurons, so for example visible light waves probably don't need to be included) of the original neuron are still emitted by the artificial substitute, like neurotransmitters. 

As I said, a reductionist should believe that the behavior of a complex system is in principle explainable as nothing more than the sum of all the interactions of its parts. And if the reductionist grants that at the scale of neurons, entanglement isn't relevant to how they interact (because of decoherence), then we should be able to assume that the behavior of the system is a sum of *local* interactions between particles that are close to one another in space. So if we divide a large system into a bunch of small volumes, the only way processes happening within one volume can have any causal influence on processes happening within a second adjacent volume is via local interactions that happen at the *boundary* between the two volumes, or particles passing through this boundary which later interact with others inside the second volume. So if you replace the inside of one volume with a very different system that nevertheless emits the same pattern of particles at the boundary of the volume, systems in other adjacent volumes "don't know the difference" and their behavior is unaffected. You didn't address my question about whether you agree or disagree with physical reductionism in my last post; can you please do that in your next response to me?
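
Here is a toy Python sketch of that locality claim (illustrative only; the "signal" function and class names are invented): two volumes with completely different internals that emit identical outputs at their boundary are indistinguishable to a neighbor that only sees what crosses the boundary.

    class BiologicalVolume:
        """Original implementation: some internal process generates the boundary outputs."""
        def boundary_output(self, t):
            return (t * 7) % 3        # stand-in for particles crossing the boundary

    class ArtificialVolume:
        """Totally different internals (a lookup table), same outputs at the boundary."""
        def __init__(self):
            self._table = {t: (t * 7) % 3 for t in range(100)}
        def boundary_output(self, t):
            return self._table[t]

    def neighbor_behavior(volume, steps=100):
        """A neighboring volume reacts only to what arrives across the boundary."""
        return [volume.boundary_output(t) for t in range(steps)]

    assert neighbor_behavior(BiologicalVolume()) == neighbor_behavior(ArtificialVolume())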

>>You can simulate the large-scale behavior of water using only the basic quantum laws that govern interactions between the charged particles that make up the atoms in each water molecule--

>Simulating the behavior of water isn't the same thing as being able to create synthetic water. If you are starving, watching a movie that explains a roast beef sandwich doesn't help you. Why would consciousness be any different?

Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected (both the behavior of other nearby neurons, and behavior of the whole person in the form of muscle movement triggered by neural signals, including speech about what the person was feeling). If you do accept that premise, then we can move on to Chalmers' argument about the implausibility of dancing/fading qualia in situations where behavior is completely unaffected--you also have not really given a clear answer to the question of whether you think there could be situations where behavior is completely unaffected but qualia are changing or fading. But one thing at a time, first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole.

>If you replaced the log in your fireplace with a fluorescent tube, it's not going to be the functional equivalent of fire if you are freezing in the winter. The problem with consciousness is that we don't know which functions, if any, make the difference between the possibility of consciousness or not. I see our human consciousness as an elaboration of animal experience, so that anything that can emulate human consciousness must be able to feel like an animal, which means feeling like you are made of meat that wants to eat, fuck, kill, run, sleep, and avoid pain.

Again, not talking about consciousness at the moment, just behaviors that we associate with consciousness. That's why, in answer to your question about synthetic water, I imagined a robot whose limb movements depend on the motions of water in an internal tank, and pointed out that if you replaced the tank with a sufficiently good simulation, the external limb movements of the robot shouldn't be any different. 
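
A compact Python sketch of that robot (again illustrative; the sine-wave "water motion" and the command names are made up): the limb controller consumes scanner readings and cannot tell whether they come from a physical tank or a faithful simulation of one.

    import math

    class RealTankScanner:
        def reading(self, t):
            return math.sin(0.1 * t)      # stand-in for observed water motion

    class SimulatedTankScanner:
        def reading(self, t):
            return math.sin(0.1 * t)      # a simulation reproducing the same dynamics

    def limb_command(scanner, t):
        """External behavior depends only on the scanner's outputs."""
        return "raise arm" if scanner.reading(t) > 0 else "lower arm"

    for t in range(50):
        assert limb_command(RealTankScanner(), t) == limb_command(SimulatedTankScanner(), t)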

>>I don't see why that follows, we don't see darwinian evolution in non-organic systems either but that doesn't prove that darwinian evolution somehow requires something more than just a physical system with the right type of organization (basically a system that can self-replicate, and which has the right sort of stable structure to preserve hereditary information to a high degree but also with enough instability for "mutations" in this information from one generation to the next)

>If we can make an inorganic material that can self-replicate, mutate, and die, then it stands more of a chance to be able to develop its detection into something like sensation then feeling, thinking, morality, etc. There must be some reason why it doesn't happen naturally after 4 billion years here, so I suspect that reinventing it won't be worth the trouble. Why not just use organic molecules instead?

I don't really want to get into the general question of the advantages and disadvantages of trying to have darwinian evolution in non-organic systems, I was just addressing your specific claim that if consciousness is just a matter of organization we should expect to see it already in non-organic systems. My point was that if you agree that the basic notion of "Darwinian evolution" is purely a matter of organization and not the details of what a system is made of (Do you in fact agree with that? Regardless of whether it might be *easier* to implement Darwinian evolution in an organic system, hopefully you wouldn't say it's in-principle impossible to implement self-replication with heredity and mutation in a non-organic system?), then it's clear that in general it cannot be true that "Feature X which we see in organic systems is purely a matter of organization" implies "We should expect to see natural examples of Feature X in non-organic systems as well". 
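
For instance, here is a minimal Python sketch of self-replication with heredity and mutation, implemented in software with no organic chemistry anywhere (the fitness target, rates, and population sizes are arbitrary choices for the example):

    import random

    TARGET = [1] * 10                     # an arbitrary 'environment'

    def fitness(genome):
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def replicate(genome, mutation_rate=0.05):
        """Heredity with occasional mutation: each bit may flip on copying."""
        return [(1 - g) if random.random() < mutation_rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]                        # selection
        population = survivors + [replicate(g) for g in survivors]

    print(max(fitness(g) for g in population))             # typically climbs toward 10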

Jesse


On Jul 12, 8:36 pm, Jesse Mazer <laserma...@hotmail.com> wrote:
> > Date: Tue, 12 Jul 2011 15:50:12 -0700
> > Subject: Re: Bruno's blasphemy.
> > From: whatsons...@gmail.com
> > To: everyth...@googlegroups.com
>
> > Thanks, I always seem to like Chalmers perspectives. In this case I
> > think that the hypothesis of physics I'm working from changes how I
> > see this argument compared to how I would have a couple years ago. My
> > thought now is that although organizational invariance is valid,
> > molecular structure is part of the organization. I think that
> > consciousness is not so much a phenomenon that is produced, but an
> > essential property that is accessed in different ways through
> > different organizations.
>
> But how does this address the thought-experiment? If each neuron were indeed replaced one by one by a functionally indistinguishable substitute, do you think the qualia would change somehow without the person's behavior changing in any way, so they still maintained that they noticed no differences?
>
>
>
> > I'll just throw out some thoughts:
>
> > If you take an MRI of a silicon brain, it's going to look nothing like
> > a human brain. If an MRI can tell the difference, why can't the brain
> > itself?
>
> Because neurons (including those controlling muscles) don't see each other visually, they only "sense" one another by certain information channels such as neurotransmitter molecules which go from one neuron to another at the synaptic gap. So if the artificial substitutes gave all the same type of outputs that other neurons could sense, like sending neurotransmitter molecules to other neurons (and perhaps other influences like creating electromagnetic fields which would affect action potentials traveling along nearby neurons), then the system as a whole should behave identically in terms of neural outputs to muscles (including speech acts reporting inner sensations of color and whether or not the qualia are "dancing" or remaining constant), even if some other system that can sense information about neurons that neurons themselves cannot (like a brain scan which can show something about the material or even shape of neurons) could tell the difference.
>
> > Can you make synthetic water? Why not?
>
> You can simulate the large-scale behavior of water using only the basic quantum laws that govern interactions between the charged particles that make up the atoms in each water molecule--see http://www.udel.edu/PR/UDaily/2007/mar/water030207.html for a discussion. If you had a robot whose external behavior was somehow determined by the behavior of water in an internal hidden tank (say it had some scanners watching the motion of water in that tank, and the scanners would send signals to the robotic limbs based on what they saw), then the external behavior of the robot should be unchanged if you replaced the actual water tank with a sufficiently detailed simulation of a water tank of that size.
>
> > If consciousness is purely organizational, shouldn't we see an example
> > of non-living consciousness in nature? (Maybe we do but why don't we
> > recognize it as such). At least we should see an example of an
> > inorganic organism.
>
> I don't see why that follows, we don't see darwinian evolution in non-organic systems either but that doesn't prove that darwinian evolution somehow requires something more than just a physical system with the right type of organization (basically a system that can self-replicate, and which has the right sort of stable structure to preserve hereditary information to a high degree but also with enough instability for "mutations" in this information from one generation to the next). In fact I think most scientists would agree that intelligent purposeful and flexible behavior must have something to do with darwinian or quasi-darwinian processes in the brain (quasi-darwinian to cover something like the way an ant colony selects the best paths to food, which does involve throwing up a lot of variants and then creating new variants closer to successful ones, but doesn't really involve anything directly analogous to "genes" or self-replication of scent trails). That said, since I am philosophically inclined towards monism I do lean towards the idea that perhaps all physical processes might be associated with some very "basic" form of qualia, even if the sort of complex, differentiated and meaningful qualia we experience are only possible in adaptive systems like the brain (Chalmers discusses this sort of panpsychist idea in his book "The Conscious Mind", and there's also a discussion of "naturalistic panpsychism" at http://www.hedweb.com/lockwood.htm#naturalistic )
>
>
>
> > My view of awareness is now subtractive and holographic (think pinhole
> > camera), so that I would read fading qualia in a different way. More
> > like dementia.. attenuating connectivity between different aspects of
> > the self, not changing qualia necessarily. The brain might respond to
> > the implanted chips, even ruling out organic rejection, the native
> > neurology may strengthen it's remaining connections and attempt to
> > compensate for the implants with neuroplasticity, routing around the
> > 'damage'.
>
> But here you seem to be rejecting the basic premise of Chalmers' thought experiment, which supposes that one could replace neurons with *functionally* indistinguishable substitutes, so that the externally-observable behavior of other nearby neurons would be no different from what it would be if the neurons hadn't been replaced. If you accept physical reductionism--the idea that the external behavior (as opposed to inner qualia) of any physical system is in principle always reducible to the interactions of all its basic components such as subatomic particles, interacting according to the same universal laws (like how the behavior of a collection of water molecules can be reduced to the interaction of all the individual charged particles obeying basic quantum laws)--then it seems to me you should accept that as long as an artificial neuron created the same physical "outputs" as the neuron it replaced (such as neurotransmitter molecules and electromagnetic fields), then the behavior of surrounding neurons should be unaffected. If you object to physical reductionism, or if you don't object to it but somehow still reject the idea that it would be possible to predict a real neuron's "outputs" with a computer simulation, or reject the idea that as long as the outputs at the boundary of the original neuron were unchanged the other neurons wouldn't behave any differently, please make it clear so I can understand what specific premise of Chalmers' thought-experiment you are rejecting.
> Jesse                                   

Craig Weinberg

Jul 13, 2011, 8:04:19 PM
to Everything List
>Again, all that matters is that the *outputs* that influence other neurons are just like those of a real neuron; any *internal* processes in the substitute are just supposed to be artificial simulations of what goes on in a real neuron, so there might be simulated genes (in a simulation running on something like a silicon chip or other future computing technology) but there'd be no need for actual DNA molecules inside the substitute.

The assumption is that there is a meaningful difference between the
processes physically within the cell and those that are input and
output between the cells. That is not my view. Just as the glowing
blue chair you are imagining now (is it a recliner? A futuristic
cartoon?) is not physically present in any neuron or group of neurons
in your skull - under any imaging system or magnification. My idea of
'interior' is different from the physical inside of the cell body of a
neuron. It is the interior topology. It's not even a place, it's just
a sensorimotive awareness of itself and its surroundings - hanging on
to its neighbors, reaching out to connect, expanding and contracting
with the mood of the collective. This is what consciousness is. This
is who we are. The closer you get to the exact nature of the neuron,
the closer you get to human consciousness. If you insist upon using
inorganic materials, that really limits the degree to which the
feelings it can host will be similar. Why wouldn't you need DNA to
feel like something based on DNA in practically every one of its
cells?

>The idea is just that *some* sufficiently detailed digital simulation would behave just like real neurons and a real brain, and "functionalism" as a philosophical view says that this simulation would have the same mental properties (such as qualia, if the functionalist thinks of "qualia" as something more than just a name for a certain type of physical process) as the original brain

A digital simulation is just a pattern in an abacus. If you've got a
gigantic abacus and a helicopter, you can make something that looks
like whatever you want it to look like from a distance, but it's still
just an abacus. It has no subjectivity beyond the physical materials
that make up the beads.

>Everything internal to the boundary of the neuron is simulated, possibly using materials that have no resemblance to biological ones.

It's a dynamic system, there is no boundary like that. The
neurotransmitters are produced by and received within the neurons
themselves. If something produces and metabolizes biological
molecules, then it is functioning at a biochemical level and not at
the level of a digital electronic simulation. If you have a heat sink
for your device it's electromotive. If you have an insulin pump it's
biological, if you have a serotonin reuptake receptor, it's
neurological.

>So if you replace the inside of one volume with a very different system that nevertheless emits the same pattern of particles at the boundary of the volume, systems in other adjacent volumes "don't know the difference" and their behavior is unaffected.

No, I don't think that's how living things work. Remember that people's
bodies often reject living tissue transplanted from other human
beings.

>You didn't address my question about whether you agree or disagree with physical reductionism in my last post, can you please do that in your next response to me?

I agree with physical reductionism as far as the physical side of
things is concerned. Qualia is the opposite that would be subject to
experiential irreductionism. Which is why you can print Shakespeare on
a poster or a fortune cookie and it's still Shakespeare, but you can't
make enriched uranium out of corned beef or a human brain out of table
salt.

>Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected

I'm rejecting the premise that there is such a thing as a functional
replacement for a neuron that is sufficiently different from a neuron
that it would matter. You can make a prosthetic appliance which your
nervous system will make do with, but it can't replace the nervous
system altogether. The nervous system predicts and guesses. It can
route around damage or utilize a device which it can understand how to
use.

>first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole.

This is tautological. You are making a nonsense distinction between
its 'internal' structure and what it does. If the internal structure
is equivalent enough, then it will be functionally equivalent to other
neurons and the organism at large. If it's not, then it won't be.
Interior mechanics that produce organic molecules and absorb them
through a semipermeable membrane are biological cells. If you can make
something that does that out of something other than nucleic acids,
then cool, but why bother? Just build the cell you want
nanotechnologically.

>Again, not talking about consciousness at the moment, just behaviors that we associate with consciousness. That's why, in answer to your question about synthetic water, I imagined a robot whose limb movements depend on the motions of water in an internal tank, and pointed out that if you replaced the tank with a sufficiently good simulation, the external limb movements of the robot shouldn't be any different.

If you are interested in the behaviors of consciousness only, all you
have to do is watch a YouTube video and you will see a simulated
consciousness behaving. Can you produce something that acts like it's
conscious? Of course.

>My point was that if you agree that the basic notion of "Darwinian evolution" is purely a matter of organization and not the details of what a system is made of (Do you in fact agree with that? Regardless of whether it might be *easier* to implement Darwinian evolution in an organic system, hopefully you wouldn't say it's in-principle impossible to implement self-replication with heredity and mutation in a non-organic system?), then it's clear that in general it cannot be true that "Feature X which we see in organic systems is purely a matter of organization" implies "We should expect to see natural examples of Feature X in non-organic systems as well".

It's a false equivalence. Darwinian evolution is a relational
abstraction and consciousness or life is a concrete experience. The
fact that we can call anything which follows a statistical pattern of
iterative selection 'Darwinian evolution' just means that it is a
basic relation of self-replicating elements in a dynamic mechanical
system. That living matter and consciousness only appear out of a
particular recipe of organic molecules doesn't mean that there can't
be another recipe; however, it does tend to support the observation
that life and consciousness are made out of some things and not others,
and it certainly supports that they are not likely phenomena which can
be produced by combinations of just any physical material, let alone
something purely computational.


Jason Resch

Jul 13, 2011, 9:16:56 PM
to everyth...@googlegroups.com

On Jul 13, 2011, at 7:04 PM, Craig Weinberg <whats...@gmail.com>
wrote:

>> Again, all that matters is that the *outputs* that influence other
>> neurons are just like those of a real neuron; any *internal*
>> processes in the substitute are just supposed to be artificial
>> simulations of what goes on in a real neuron, so there might be
>> simulated genes (in a simulation running on something like a
>> silicon chip or other future computing technology) but there'd be
>> no need for actual DNA molecules inside the substitute.
>
> The assumption is that there is a meaningful difference between the
> processes physically within the cell and those that are input and
> output between the cells. That is not my view. Just as the glowing
> blue chair you are imagining now (is it a recliner? A futuristic
> cartoon?) is not physically present in any neuron or group of neurons
> in your skull -

If it is not present physically, then what causes a person to say "I
am imagining a blue chair"?

> under any imaging system or magnification. My idea of
> 'interior' is different from the physical inside of the cell body of a
> neuron. It is the interior topology. It's not even a place, it's just
> a sensorimotive

Could you please define this term? I looked it up but the
definitions I found did not seem to fit.

> awareness of itself and its surroundings - hanging on
> to its neighbors, reaching out to connect, expanding and contracting
> with the mood of the collective. This is what consciousness is. This
> is who we are. The closer you get to the exact nature of the neuron,
> the closer you get to human consciousness.

There is such a thing as too low a level. What leads you to believe
the neuron is the appropriate level to find qualia, rather than the
states of neuron groups or the whole brain? Taking the opposite
direction, why not say it must be explained in terms of chemistry or
quarks? What led you to conclude it is the neurons? After all, are
rat neurons very different from human neurons? Do rats have the same
range of qualia as we do?

> If you insist upon using
> inorganic materials, that really limits the degree to which the
> feelings it can host will be similar.

Assuming qualia supervene on the individual cells or their chemistry.

> Why wouldn't you need DNA to
> feel like something based on DNA in practically every one of its
> cells?

You would have to show that the presence of DNA in part determines the
> evolution of the brain's neural network. If not, it is as relevant to
you and your mind as the neutrinos passing through you.

>
>
>> The idea is just that *some* sufficiently detailed digital
>> simulation would behave just like real neurons and a real brain,
>> and "functionalism" as a philosophical view says that this
>> simulation would have the same mental properties (such as qualia,
>> if the functionalist thinks of "qualia" as something more than just
>> a name for a certain type of physical process) as the original brain
>
> A digital simulation is just a pattern in an abacus.

The state of an abacus is just a number, not a process. I think you
may not have a full understanding of the differences between a Turing
machine and a string of bits. A Turing machine can mimic any process
that is definable and does not take an infinite number of steps.
Turing machines are dynamic, self-directed entities. This
distinguishes them from cartoons, YouTube videos and the state of an
abacus.
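
To make that concrete, here is a minimal Turing machine simulator in Python (a sketch only; the transition table shown just increments a binary number, but the same loop can run any finite table):

    def run(transitions, tape, state="start", blank="_", max_steps=10000):
        cells = dict(enumerate(tape))
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Increment a binary number written least-significant-bit first.
    INCREMENT = {
        ("start", "0"): ("1", "R", "halt"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }

    print(run(INCREMENT, "110"))   # "110" is 3 (LSB first); prints "001", which is 4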

Since they have such a universal capability to mimic processes, then
the idea that the brain is a process leads naturally to the idea of
intelligent computers which could function identically to organic
brains.

Then, if you deny the logical possibility of zombies, or fading
qualia, you must accept such an emulation of a human mind would be
equally conscious.

> If you've got a
> gigantic abacus and a helicopter, you can make something that looks
> like whatever you want it to look like from a distance, but it's still
> just an abacus. It has no subjectivity beyond the physical materials
> that make up the beads.

The idea behind a computer simulation of a mind is not to make
something that looks like a brain but to make something that behaves
and works like a brain.

>
>
>> Everything internal to the boundary of the neuron is simulated,
>> possibly using materials that have no resemblance to biological ones.
>
> It's a dynamic system,

So is a turing machine.

> there is no boundary like that. The
> neurotransmitters are produced by and received within the neurons
> themselves. If something produces and metabolizes biological
> molecules, then it is functioning at a biochemical level and not at
> the level of a digital electronic simulation. If you have a heat sink
> for your device it's electromotive. If you have an insulin pump it's
> biological, if you have a serotonin reuptake receptor, it's
> neurological.
>
>> So if you replace the inside of one volume with a very different
>> system that nevertheless emits the same pattern of particles at the
>> boundary of the volume, systems in other adjacent volumes "don't
>> know the difference" and their behavior is unaffected.
>
> No, I don't think that's how living things work. Remember that people's
> bodies often reject living tissue transplanted from other human
> beings.

Rejection requires the body knowing there is a difference, which is
against the starting assumption.

>
>
>> You didn't address my question about whether you agree or disagree
>> with physical reductionism in my last post, can you please do that
>> in your next response to me?
>
> I agree with physical reductionism as far as the physical side of
> things is concerned. Qualia is the opposite that would be subject to
> experiential irreductionism. Which is why you can print Shakespeare on
> a poster or a fortune cookie and it's still Shakespeare, but you can't
> make enriched uranium out of corned beef or a human brain out of table
> salt.
>
>> Because I'm just talking about the behavioral aspects of
>> consciousness now, since it's not clear if you actually accept or
>> reject the premise that it would be possible to replace neurons
>> with functional equivalents that would leave *behavior* unaffected
>
> I'm rejecting the premise that there is such a thing as a functional
> replacement for a neuron that is sufficiently different from a neuron
> that it would matter.

I pasted real-life counterexamples to this: artificial cochleas and
retinas.

> You can make a prosthetic appliance which your
> nervous system will make do with, but it can't replace the nervous
> system altogether.

At what point does the replacement magically stop working?

> The nervous system predicts and guesses. It can
> route around damage or utilize a device which it can understand how to
> use.

So it can use an artificial retina but not an artificial neuron?


Jesse Mazer

Jul 13, 2011, 10:12:20 PM
to everyth...@googlegroups.com


> Date: Wed, 13 Jul 2011 17:04:19 -0700
> Subject: Re: bruno list
> From: whats...@gmail.com
> To: everyth...@googlegroups.com


> >Again, all that matters is that the *outputs* that influence other neurons are just like those of a real neuron; any *internal* processes in the substitute are just supposed to be artificial simulations of what goes on in a real neuron, so there might be simulated genes (in a simulation running on something like a silicon chip or other future computing technology) but there'd be no need for actual DNA molecules inside the substitute.

> The assumption is that there is a meaningful difference between the
> processes physically within the cell and those that are input and
> output between the cells. That is not my view. Just as the glowing
> blue chair you are imagining now (is it a recliner? A futuristic
> cartoon?) is not physically present in any neuron or group of neurons
> in your skull - under any imaging system or magnification. My idea of
> 'interior' is different from the physical inside of the cell body of a
> neuron. It is the interior topology. It's not even a place, it's just
> a sensorimotive awareness of itself and its surroundings - hanging on
> to its neighbors, reaching out to connect, expanding and contracting
> with the mood of the collective. This is what consciousness is. This
> is who we are.

You're misunderstanding what I meant by "internal"; I wasn't talking about subjective interiority (qualia), but *only* about the physical processes in the spatial interior of the cell. I am trying to first concentrate on external behavioral issues that don't involve qualia at all, to see whether your disagreement with Chalmers' argument is because you disagree with the basic starting premise that it would be possible to replace neurons by artificial substitutes which would not alter the *behavior* of surrounding neurons (or of the person as a whole); only after assuming this does Chalmers go on to speculate about what would happen to qualia as neurons were gradually replaced in this way. Remember this paragraph from my last post:

"Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected (both the behavior of other nearby neurons, and behavior of the whole person in the form of muscle movement triggered by neural signals, including speech about what the person was feeling). If you do accept that premise, then we can move on to Chalmers' argument about the implausibility of dancing/fading qualia in situations where behavior is completely unaffected--you also have not really given a clear answer to the question of whether you think there could be situations where behavior is completely unaffected but qualia are changing or fading. But one thing at a time, first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole."

The reason I want to separate these two issues, and first deal only with physical behaviors, is that in your original answer to my question about Chalmers' thought-experiment you made several comments suggesting there would be behavioral changes, like the suggestion that replacing parts of the brain with artificial substitutes would cause "dementia" (which normally leads to changes in behavior) and the suggestion that "the native neurology may strengthen it's remaining connections and attempt to compensate for the implants with neuroplasticity, routing around the 'damage'." So please, until we have this issue settled of whether it would be possible in principle to create substitutes which caused no behavioral changes in surrounding neurons or in the whole person, can we leave aside issues relating to qualia and subjectivity?



> >Everything internal to the boundary of the neuron is simulated, possibly using materials that have no resemblance to biological ones.

> It's a dynamic system, there is no boundary like that.

If you accept reductionism and accept that all interactions between the basic units are *local* ones, then you can divide up any complex system into a collection of volumes in absolutely any way you please (you don't have to pick volumes that correspond to 'natural' boundaries like the edges of a cell), and it will always be true that physical processes in one volume can only be influenced by other volumes via local influences (like molecules or photons) coming through that system's boundary. If you don't agree with this I don't think you understand the basic idea of a reductionist theory based on local interactions.

>The
> neurotransmitters are produced by and received within the neurons
> themselves.

Sure, but other neurons don't know anything about the history of neurotransmitter molecules arriving at their own "input" synapses; if exactly the same neurotransmitter molecules were arriving they wouldn't behave differently depending on whether those molecules had been synthesized inside a cell or were constructed by a nanobot or something.



> >So if you replace the inside of one volume with a very different system that nevertheless emits the same pattern of particles at the boundary of the volume, systems in other adjacent volumes "don't know the difference" and their behavior is unaffected.

> No, I don't think that's how living things work. Remember that people's
> bodies often reject living tissue transplanted from other human
> beings.

Why do you think that's a reason to reject the local reductionist principle I suggest? In a local reductionist theory, presumably the reason that my cells reject foreign tissue has to do with the foreign tissue giving off molecules that don't match the ones given off by my own cells, and my cells picking up those molecules and reacting to them. See for example the discussion of "histocompatibility molecules" at http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/H/HLA.html

Are you suggesting that even if the molecules given off by foreign cells were no different at all from those given off by my own cells, my cells would nevertheless somehow be able to nonlocally sense that the DNA in the nuclei of these cells was foreign?


> >You didn't address my question about whether you agree or disagree with physical reductionism in my last post, can you please do that in your next response to me?

> I agree with physical reductionism as far as the physical side of
> things is concerned. 

Well, it's not clear to me that you understand the implications of physical reductionism based on your rejection of my comments about physical processes in one volume only being affected via signals coming across the boundary. Unless the issue is that you accept physical reductionism, but reject the idea that we can treat all interactions as being local ones (and again I would point out that while entanglement may involve a type of nonlocal interaction--though this isn't totally clear, many-worlds advocates say they can explain entanglement phenomena in a local way--because of decoherence, it probably isn't important for understanding how different neurons interact with one another). 


> >Because I'm just talking about the behavioral aspects of consciousness now, since it's not clear if you actually accept or reject the premise that it would be possible to replace neurons with functional equivalents that would leave *behavior* unaffected

> I'm rejecting the premise that there is such a thing as a functional
> replacement for a neuron that is sufficiently different from a neuron
> that it would matter.

And is that because you reject the idea that in any volume of space, physical processes outside that volume can only be affected by processes in its interior via particles (or other local signals) crossing the boundary of that volume?



> >first I want to focus on this issue of whether you accept that in principle it would be possible to replace neurons with "functional equivalents" which emit the same signals to other neurons but have a totally different internal structure, and whether you accept that this would leave behavior unchanged, both for nearby neurons and the muscle movements of the body as a whole.

> This is tautological. You are making a nonsense distinction between
> its 'internal' structure and what it does. If the internal structure
> is equivalent enough, then it will be functionally equivalent to other
> neurons and the organism at large.

I don't know what you mean by "functionally equivalent" though; are you using that phrase to suggest some sort of similarity in the actual molecules and physical structure of what's inside the boundary? My point is that it's perfectly possible to imagine replacing a neuron with something that has a totally different physical structure, like a tiny carbon nanotube computer, which senses incoming neurotransmitter molecules (and any other relevant physical inputs from nearby cells), calculates how the original neuron would have behaved in response to those inputs if it were still there, uses those calculations to figure out what signals the neuron would have been sending out of the boundary, and then sends the exact same signals itself (again, imagine that it has a store of neurotransmitters which can be sent out of an artificial synapse into the synaptic gap connected to some other neuron). So it *is* "functionally equivalent" if by "function" you just mean what output signals it transmits in response to what input signals, but it's not functionally equivalent if you're talking about its actual internal structure.

Also note that these hypothetical carbon nanotube computers only need to emit actual neurotransmitters at points where they interface with regular biological cells. As you replace more and more biological cells with substitutes you could start to have synapses where one artificial neuron is connected to another artificial neuron, then they could dispense with the step of sending actual neurotransmitter molecules through the gap and instead just simulate this process to figure out how one artificial neuron should influence the other.
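
A schematic Python sketch of that hybrid arrangement (the class and message names are invented for illustration, not anything from the thread): at an interface with a biological cell the substitute must release physical neurotransmitters, but between two artificial neurons the exchange can be simulated directly.

    class Synapse:
        def __init__(self, pre_is_artificial, post_is_artificial):
            # the physical step can be skipped only if both sides are artificial
            self.simulatable = pre_is_artificial and post_is_artificial

        def transmit(self, amount):
            if self.simulatable:
                return ("simulated exchange", amount)        # no physical molecules needed
            return ("physical neurotransmitter release", amount)

    print(Synapse(True, False).transmit(0.8))   # artificial -> biological: real molecules
    print(Synapse(True, True).transmit(0.8))    # artificial -> artificial: pure simulation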

> If it's not, then it won't be.
> Interior mechanics that produce organic molecules and absorb them
> through a semipermeable membrane are biological cells. If you can make
> something that does that out of something other than nucleic acids,
> then cool, but why bother? 

No one is suggesting this would be a useful thing to do practically; it's a philosophical thought-experiment. If you do accept that it would be possible in principle to gradually replace real neurons with artificial ones in a way that wouldn't change the behavior of the remaining real neurons and wouldn't change the behavior of the person as a whole, but with the artificial ones having a very different internal structure and material composition than the real ones, then we can move on to Chalmers' argument about why this sort of behavioral indistinguishability suggests qualia probably wouldn't change either. But as I said I don't want to discuss that unless we're clear on whether you accept the original premise of the thought-experiment.


> >Again, not talking about consciousness at the moment, just behaviors that we associate with consciousness. That's why, in answer to your question about synthetic water, I imagined a robot whose limb movements depend on the motions of water in an internal tank, and pointed out that if you replaced the tank with a sufficiently good simulation, the external limb movements of the robot shouldn't be any different.

> If you are interested in the behaviors of consciousness only, all you
> have to do is watch a youtube and you will see a simulated
> consciousness behaving. 

That's just a recording of something that actually happened to a biological consciousness, not a simulation which can respond to novel external stimuli (like new questions I can think to ask it) which weren't presented to any biological original.


> >My point was that if you agree that the basic notion of "Darwinian evolution" is purely a matter of organization and not the details of what a system is made of (Do you in fact agree with that? Regardless of whether it might be *easier* to implement Darwinian evolution in an organic system, hopefully you wouldn't say it's in-principle impossible to implement self-replication with heredity and mutation in a non-organic system?), then it's clear that in general it cannot be true that "Feature X which we see in organic systems is purely a matter of organization" implies "We should expect to see natural examples of Feature X in non-organic systems as well".

> It's a false equivalence. Darwinian evolution is a relational
> abstraction and consciousness or life is a concrete experience.

But when you originally asked why we don't "see" consciousness in non-biological systems, I figured you were talking about the external behaviors we associate with consciousness, not inner experience. After all we have no way of knowing the inner experience of any system but ourselves, we only infer that other beings have similar inner experiences based on similar external behaviors. If you want to just talk about inner experience, again we should first clear up whether you can accept the basic premise of Chalmers' thought experiment, then if you do we can move on to talking about what it implies for inner experience.

Jesse

Craig Weinberg

Jul 14, 2011, 12:08:40 AM
to Everything List
>If it is not present physically, then what causes a person to say "I
>am imagining a blue chair"?

A sensorimotive circuit. A sensory feeling which is a desire to
fulfill itself through the motive impulse to communicate that
statement.

>Could you please define this term? I looked it up but the
>definitions I found did not seem to fit.

Nerves are referred to as afferent and efferent also. My idea is that
all nerve functionality is sense (input) and motive (output). I would
say motor, but it's confusing because something like changing your
mind or making a choice is motive but not physically expressed as
motor activity, but I think that they are the same thing. I am
generalizing what nerves do to the level of physics, so that our
nerves are doing the same thing that all matter is doing, just
hypertrophied to host more meta-elaborated sensorimotive phenomena.

>There is such a thing as too low a level. What leads you to believe
>the neuron is the appropriate level to find qualia, rather than the
>states of neuron groups or the whole brain?

I didn't say it was. My point was just that the more similar you can
get to imitating a human neuron, the closer a brain based on that
imitation will come to having the potential for human
consciousness.

>You would have to show that the presence of DNA in part determines the
>evolution of the brain's neural network. If not, it is as relevant to
>you and your mind as the neutrinos passing through you.

Chromosome mutations cause mutations in the brain's neural network, do
they not? btw, I don't interpret neutrinos, photons, or other massless
chargeless phenomena as literal particles. QM is a misinterpretation.
Accurate, but misinterpreted.

>> A digital simulation is just a pattern in an abacus.

>The state of an abacus is just a number, not a process. I think you
>may not have a full understanding of the differences between a Turing
>machine and a string of bits. A Turing machine can mimic any process
>that is definable and does not take an infinite number of steps.
>Turing machines are dynamic, self-directed entities. This
>distinguishes them from cartoons, YouTube videos and the state of an
>abacus.

A pattern is not necessarily static, especially not an abacus, the
purpose of which is to be able to change the positions to any number.
Just like a cartoon. If you are defining Turing machines as self-directed
entities then you have already defined them as conscious, so
it's a fallacy to present it as a question. Since I think that a
machine cannot have a self, but is instead the self's perception of
the self's opposite, I'm not compelled by any arguments which imagine
that purely quantitative phenomena (if there were such a thing) can be
made to feel.

>Then, if you deny the logical possibility of zombies, or fading
>qualia, you must accept such an emulation of a human mind would be
>equally conscious.

These ideas are not applicable in my model of consciousness and its
relation to neurology.

>The idea behind a computer simulation of a mind is not to make
>something that looks like a brain but to make something that behaves
>and works like a brain.

I think that for it to work exactly like a brain it has to be a brain.
If you want something that behaves like an intelligent automaton, then
you can use a machine made of inorganic matter. If you want something
that feels and behaves like a living organism then you have to create
a living organism out of matter that can self-replicate and die.

>Rejection requires the body knowing there is a difference, which is
>against the starting assumption.

If you are already defining something as biologically identical, then
you are effectively asking 'if something non-biological were
biological, would it perform biological functions?'

>I pasted real-life counterexamples to this: artificial cochleas and
>retinas.

Those are not replacements for neurons; they are prostheses for a
nervous system. Big difference. I can replace a car engine with
horses, but I can't replace a horse's brain with a car engine.

>At what point does the replacement magically stop working?

At what point does cancer magically stop you from waking up?

>So it can use an artificial retina but not an artificial neuron?

A neuron can use an artificial neuron but a person can't use an
artificial neuron except through a living neuron.

Craig


Jason Resch

Jul 14, 2011, 1:55:42 AM
to everyth...@googlegroups.com
On Wed, Jul 13, 2011 at 11:08 PM, Craig Weinberg <whats...@gmail.com> wrote:
> >If it is not present physically, then what causes a person to say "I
> >am imagining a blue chair"?

> A sensorimotive circuit. A sensory feeling which is a desire to
> fulfill itself through the motive impulse to communicate that
> statement.

But physical effects must come from physical causes unless your theory involves some form of dualism. The imagined image in the mind has some physical representation; otherwise any communication regarding that imagined image would be coming from nowhere.
 

> >Could you please define this term? I looked it up but the
> >definitions I found did not seem to fit.

> Nerves are referred to as afferent and efferent also. My idea is that
> all nerve functionality is sense (input) and motive (output). I would
> say motor, but it's confusing because something like changing your
> mind or making a choice is motive but not physically expressed as
> motor activity, but I think that they are the same thing. I am
> generalizing what nerves do to the level of physics, so that our
> nerves are doing the same thing that all matter is doing, just
> hypertrophied to host more meta-elaborated sensorimotive phenomena.

> >There is such a thing as too low a level. What leads you to believe
> >the neuron is the appropriate level to find qualia, rather than the
> >states of neuron groups or the whole brain?

> I didn't say it was. My point was just that the more similar you can
> get to imitating a human neuron, the closer a brain based on that
> imitation will come to having the potential for human
> consciousness.

> >You would have to show that the presence of DNA in part determines the
> >evolution of the brain's neural network. If not, it is as relevant to
> >you and your mind as the neutrinos passing through you.

> Chromosome mutations cause mutations in the brain's neural network, do
> they not?

Perhaps very rarely it could, but this would be more a malfunction than general behavior.  The question is, what does DNA have to do with the function of an active brain which is thinking or experiencing?  If the neurons behaved the same way without it, why should consciousness be impacted?
 
> btw, I don't interpret neutrinos, photons, or other massless
> chargeless phenomena as literal particles. QM is a misinterpretation.
> Accurate, but misinterpreted.

Whatever you consider them to be, they are physical but not thought to be important to the general operation of the brain.  My original point is there is a lot of noise, and perhaps included in that noise is all the biochemistry itself going on in the background while neurons perform their function.  And therefore, anything which is noise doesn't need to be replicated in an artificial production of a brain.
 

> >> A digital simulation is just a pattern in an abacus.

> >The state of an abacus is just a number, not a process. I think you
> >may not have a full understanding of the differences between a Turing
> >machine and a string of bits. A Turing machine can mimic any process
> >that is definable and does not take an infinite number of steps.
> >Turing machines are dynamic, self-directed entities. This
> >distinguishes them from cartoons, YouTube videos and the state of an
> >abacus.

A pattern is not necessarily static, especially not an abacus, the
purpose of which is to be able to change the positions to any number.
Just like a cartoon.

Okay, but with an abacus, or a cartoon, someone else is driving it, and perhaps randomly.  A cartoon does not draw itself, nor does an abacus perform computations on its own.
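
To make that distinction concrete, here is a toy Python sketch (purely illustrative; the rule and state are invented): the abacus is passive data that changes only when something outside moves it, while a Turing-style machine carries its own transition rule and advances its own state.

# Passive pattern: a row of abacus beads is just stored positions;
# it changes only if an outside agent moves the beads.
abacus = [0, 3, 1, 4]

# Self-driving process: a tiny machine that reads its own state and
# computes its next state with no outside driver.
def step(state):
    head, tape = state
    tape[head] = (tape[head] + 1) % 5      # its built-in rule
    return ((head + 1) % len(tape), tape)  # move the head along

state = (0, [0, 3, 1, 4])
for _ in range(8):   # the machine advances itself, step by step
    state = step(state)
print(state)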
 
If you are defining Turing machines as self-
directed entities then you have already defined them as conscious, so
it's a fallacy to present it as a question.

Ignore the "self" in self-directed, it was intended to mean they are autonomous, not define that they are conscious.
 
Since I think that a
machine cannot have a self, but is instead the self's perception of
the self's opposite, I'm not compelled by any arguments which imagine
that purely quantitative phenomena (if there were such a thing) can be
made to feel.

"Purely quantitative" suggests that the only values that can be represented by a machine are pure quantities (numbers, values, magnitudes).  Yet a Turing machine can represent an infinite number of relations which are not purely quantitative.  For example, an algorithm might determine whether an input number is prime or not, and based on the result set a bit as a 1 or a 0.  Now if that bit is an input to another function, that bit no longer represents the quantity of 1 or 0, but instead now represents the "qualitative" property of the input number's primality or compositeness.  There may be other qualitative values that with the right processing and interpretation by the right functions could correspond to qualitative properties such as colors.  You can't write off Turing machines as only dealing with numbers.  The possible relations, functions, and processes a Turing machine can implement result in an infinitely varied, deep, complex, and rich landscape of possibilities.


>Then, if you deny the logical possibility of zombies, or fading
>qualia, you must accept that such an emulation of a human mind would be
>equally conscious.

These ideas are not applicable in my model of consciousness and its
relation to neurology.

Either zombies are possible within your model or they are not.  Either fading qualia is possible in your model or it is not.  You can't define them as irrelevant in your theory to avoid answering the tough questions. :-)
 

>The idea behind a computer simulation of a mind is not to make
>something that looks like a brain but to make something that behaves
>and works like a brain.

I think that for it to work exactly like a brain it has to be a brain.

 
If you want something that behaves like an intelligent automaton, then
you can use a machine made of inorganic matter.

Okay I agree with this so far.
 
If you want something
that feels and behaves like a living organism

I am confused: are you saying an inorganic machine can only behave like an automaton, or can it behave like a living organism?  Do you believe it is possible for an inorganic machine to exhibit identical external behavior to a living organism in all situations?  (A YouTube video can't respond to questions, and therefore would not count.)
 
then you have to create
a living organism out of matter that can self replicate and die.

What do self-replication and death have to do with what a mind feels at any point in time?  Aren't eunuchs conscious?  What about someone who planned to freeze himself so he wouldn't die?
 

>Rejection requires the body knowing there is a difference, which is
>against the starting assumption.

If you are already defining something as biologically identical, then
you are effectively asking 'if something non-biological were
biological, would it perform biological functions?'

It was not identical: the interfaces, all the points that made contact with the outside, were identical, but the insides were completely different.
 

>I pasted real life counter examples to this.  Artificial cochlea and
>retinas.

Those are not replacements for neurons,

Actually the retina prosthesis replaces neurons which perform processing, and thus those neurons are considered an extension of the brain.
 
they are prostheses for a
nervous system. Big difference.

What is different about neurons in the nervous system vs. neurons in the brain?  Why is it we can substitute neurons in the nervous system without problem, but you suggest this fails if we move any deeper into the brain?  To me, the only difference is the complex way in which they are connected.
 
I can replace a car engine with
horses, but I can't replace a horse's brain with a car engine.

>At what point does the replacement magically stop working?

At what point does cancer magically stop you from waking up?


Cancer cells don't serve as functional replacements for healthy cells, whereas according to the thought experiment the neural prosthesis would.  The question of when consciousness suddenly disappears, fades, dances, etc., if it does at all, during a neuron replacement is an interesting and illuminating question for any theory of mind, and it is something you should attempt to answer using your theory.
 
>So it can use an artificial retina but not an artificial neuron?

A neuron can use an artificial neuron but a person can't use an
artificial neuron except through a living neuron.


Interesting.  So do you think a person could have every part of their brain substituted with a prosthesis, with the exception of one neuron, and still be conscious?  Why or why not?

Jason

Craig Weinberg

unread,
Jul 14, 2011, 8:45:55 AM7/14/11
to Everything List
>You're misunderstanding what I meant by "internal", I wasn't talking about
>subjective interiority (qualia), but *only* about the physical processes in
>the spatial interior of the cell. I am trying to first concentrate on
>external behavioral issues that don't involve qualia at all, to see whether
>your disagreement with Chalmers' argument is because you disagree with the
>basic starting premise that it would be possible to replace neurons by
>artificial substitutes which would not alter the *behavior* of surrounding
>neurons (or of the person as a whole), only after assuming this does
>Chalmers go on to speculate about what would happen to qualia as neurons
>were gradually replaced in this way. Remember this paragraph from my last
>post:

In my model, physical processes are just the exterior, like clothing
of the qualia (perceivable experiences). There is no such thing as
external behavior that doesn't involve qualia; that's my point. It's
all one thing - sensorimotive perception of relativistic
electromagnetism. I think that in the best case scenario, what
happens when you virtualize your brain with a non-biological neuron
emulation is that you gradually lose consciousness but the remaining
consciousness has more and more technology at its disposal. You can't
remember your own name, but when asked, there would be a meaningless
word that comes to mind for no reason. To me, the only question is
how virtual is virtual. If you emulate the biology, that's a
completely different scenario than running a logical program on a
chip. Logic doesn't ooze serotonin.

>Are you suggesting that even if the molecules given off by foreign cells
>were no different at all from those given off by my own cells, my cells
>would nevertheless somehow be able to nonlocally sense that the DNA in the
>nuclei of these cells was foreign?

It's not about whether other cells would sense the imposter neuron,
it's about how much of an imposter the neuron is. If it acts like a
real cell in every physical way, if another organism can kill it and
eat it and metabolize it completely, then you pretty much have a
cell. Whatever cannot be metabolized in that way is what potentially
detracts from the ability to sustain consciousness. It's not your
cells that need to sense DNA, it's the question of whether a brain
composed entirely of, or significantly of, cells lacking DNA would be
conscious in the same way as a person.

>Well, it's not clear to me that you understand the implications of physical
>reductionism based on your rejection of my comments about physical processes
>in one volume only being affected via signals coming across the boundary.
>Unless the issue is that you accept physical reductionism, but reject the
>idea that we can treat all interactions as being local ones (and again I
>would point out that while entanglement may involve a type of nonlocal
>interaction--though this isn't totally clear, many-worlds advocates say they
>can explain entanglement phenomena in a local way--because of decoherence,
>it probably isn't important for understanding how different neurons interact
>with one another).

It's not clear that you are understanding that my model of physics
is not the same as yours. Imagine an ideal glove that is white on the
outside and on the inside feels like latex. As you move your hand in
the glove you feel all sorts of things on the inside: textures,
shapes, etc. From the outside you see different patterns appearing on
it. When you clench your fist, you can see right through the glove to
your hand, but when you do, your hand goes completely numb and you
can't feel the glove. What you are telling me is that if you make a
glove that looks exactly like this crazy glove, if it satisfies all
glove-like properties such that it makes these crazy designs on the
outside, then it must be having the same effect on the inside. My
position is that no, not unless it is close enough to the real glove
physically that it produces the same effects on the inside, which you
cannot know unless you are wearing the glove.

>And is that because you reject the idea that in any volume of space,
>physical processes outside that volume can only be affected by processes in
>its interior via particles (or other local signals) crossing the boundary of
>that volume?

No, it's because the qualia possible in inorganic systems are
limited to inorganic qualia. Think of consciousness as DNA. Can you
make DNA out of string? You could make a really amazing model of it
out of string, but it's not going to do what DNA does. You are
saying, well, what if I make DNA out of something that acts just like
DNA? I'm asking, like what? If it acts like DNA in every way, then it
isn't an emulation, it's just DNA by another name.

>I don't know what you mean by "functionally equivalent" though, are you
>using that phrase to suggest some sort of similarity in the actual molecules
>and physical structure of what's inside the boundary?

I'm using that phrase because you are. I'm just saying that what the
cell is causes what the cell does. You can try to change what the
cell is but retain what you think is what the cell does; the more you
change it, the higher the odds that you are changing something that
you have no way of knowing is important.

>My point is that it's perfectly possible to imagine replacing a neuron with
>something that has a totally different physical structure, like a tiny
>carbon nanotube computer, but that it's sensing incoming neurotransmitter
>molecules (and any other relevant physical inputs from nearby cells) and
>calculating how the original neuron would have behaved in response to those
>inputs if it were still there, and using those calculations to figure out
>what signals the neuron would have been sending out of the boundary, then
>making sure to send the exact same signals itself (again, imagine that it
>has a store of neurotransmitters which can be sent out of an artificial
>synapse into the synaptic gap connected to some other neuron). So it *is*
>"functionally equivalent" if by "function" you just mean what output signals
>it transmits in response to what input signals, but it's not functionally
>equivalent if you're talking about its actual internal structure.

But what the signals and neurotransmitters are coming out of is not
functionally equivalent. The real thing feels and has intent; it
doesn't just calculate and imitate. You can't build a machine that
feels and has intent out of basic units that can only calculate and
imitate. It just scales up to a sentient being vs. a spectacular
automaton.

>If you do accept that it would be possible in principle to gradually replace
>real neurons with artificial ones in a way that wouldn't change the behavior
>of the remaining real neurons and wouldn't change the behavior of the person
>as a whole, but with the artificial ones having a very different internal
>structure and material composition than the real ones, then we can move on
>to Chalmer's argument about why this sort of behavioral indistinguishability
>suggests qualia probably wouldn't change either. But as I said I don't want
>to discuss that unless we're clear on whether you accept the original
>premise of the thought-experiment.

It all depends on how different the artificial neurons are. There
might be other recipes for consciousness and life, but so far, we
have no reason to believe that inorganic logic can sustain either.
For the purposes of this thread, let's say no. If it's artificial
enough to be called artificial, then the consciousness associated
with it is also inauthentic.

>That's just a recording of something that actually happened to a biological
>consciousness, not a simulation which can respond to novel external stimuli
>(like new questions I can think to ask it) which weren't presented to any
>biological original.

That's easy. You just make a few hundred YouTubes and associate them
with some AGI logic. Basically make a video ELIZA (which would
actually be a fantastic doctorate thesis, I would think). Now you can
have a conversation with your YouTube person in real time. You could
even splice together phonemes to make them just able to speak English
in general and then hook them up to a Google translation. Would you
then say that if the AGI algorithms were good enough - functionally
equivalent to human intelligence in every way - that the YouTube was
conscious?

>But when you originally asked why we don't "see" consciousness in
>non-biological systems, I figured you were talking about the external
>behaviors we associate with consciousness, not inner experience. After all
>we have no way of knowing the inner experience of any system but ourselves,
>we only infer that other beings have similar inner experiences based on
>similar external behaviors.

That's what I'm trying to tell you. Consciousness is nothing but
inner experience. It has no external behaviors; we just recognize our
own feelings in other things when we can see them do something that
reminds us of ourselves.

> If you want to just talk about inner experience, again we should first
>clear up whether you can accept the basic premise of Chalmers' thought
>experiment, then if you do we can move on to talking about what it implies
>for inner experience.

I don't want to talk about inner experience unless you want to. I
want to talk about a fundamental reordering of the cosmos, which, if
it were correct, would be staggeringly important and which I have not
seen anywhere else:

1. Mind and body are not merely separate, but perpendicular
topologies of the same ontological continuum of sense.
2. The interior of electromagnetism is sensorimotive, the interior of
determinism is free will, and the interior of general relativity is
perception.
3. Quantum Mechanics is a misinterpretation of atomic quorum sensing.
4. Time, space, and gravity are void. Their effects are explained by
perceptual relativity and sensorimotor electromagnetism.
5. The "speed of light" *c* is not a speed, it's a condition of
nonlocality or absolute velocity, representing a third state of
physical relation as the opposite of both stillness and motion.

It's not about meticulous logical deduction, it's about grasping the
largest, broadest description of the cosmos possible which doesn't
leave anything out. I just want to see if this map flies, and if not,
why not?



Bruno Marchal

unread,
Jul 15, 2011, 4:39:48 AM7/15/11
to everyth...@googlegroups.com

On 14 Jul 2011, at 14:39, Craig Weinberg wrote:

I don't want to talk about inner experience. I want to talk about my fundamental reordering of the cosmos, which if it were correct, would be staggeringly important and I have not seen anywhere else:

  1. Mind and body are not merely separate, but perpendicular topologies of the same ontological continuum of sense.
Could you define "perpendicular topologies"? You say you don't study math, so why use mathematical terms (which seems non sensical for a mathematicians, unless you do a notion of set of topologies with some scalar products, but then you should give it.



  2. The interior of electromagnetism is sensorimotive, the interior of determinism is free will, and the interior of general relativity is perception.
What do you mean by the "interior of electromagnetism"?


  3. Quantum Mechanics is a misinterpretation of atomic quorum sensing.
This seems like nonsense.



  4. Time, space, and gravity are void. Their effects are explained by perceptual relativity and sensorimotor electromagnetism.
?


  1. The "speed of light" c is not a speed it's a condition of nonlocality or absolute velocity, representing a third state of physical relation as the opposite of both stillness and motion.
?



It's not about meticulous logical deduction, it's about grasping the largest, broadest description of the cosmos possible which doesn't leave anything out. I just want to see if this map flies, and if not, why not?


Anyway, you seem to presuppose some physicalness, and so by the UDA reasoning, you need a physics and a cognitive science with (very special) infinities. This seems to make the mind-body problem (MB), and its formulation, artificially more complex, without motivation. Without an attempt to make things clearer I can hardly add anything. Perhaps understanding the MB problem in the comp context might help you to formulate it in some non-comp context.
 
Bruno



m.a.

unread,
Jul 15, 2011, 9:46:51 AM7/15/11
to everyth...@googlegroups.com
You should get work helping Rachel collect material. You'd be a natural.    m
 
 

Craig Weinberg

unread,
Jul 15, 2011, 11:19:03 AM7/15/11
to Everything List
>Could you define "perpendicular topologies"? You say you don't study
>math, so why use mathematical terms (which seems non sensical for a
>mathematicians, unless you do a notion of set of topologies with some
>scalar products, but then you should give it.

Yeah, I'm not sure if I mean it literally or figuratively. Maybe
better to say a pseudo-dualistic, involuted topological continuum?
Stephen was filling me in on some of the terminology. I'm looking at a
continuum of processes which range from the discrete [dense, public,
exterior, generic, a-signifying, literal...at the extreme would be
local existential stasis, fixed values, occidentalism (Only Material
Matter Matters)] to the compact [diffuse, private, interior,
proprietary, signifying, figurative...at the extreme would be
non-local essential exstasis, orientalism (Anything Can Mean
Everything)].
They are perpendicular because it's not as if there is a one to one
correspondence between each neuron and a single feeling, feelings are
chords of entangled sensorimotive events which extend well beyond the
nervous system.

Since the duality is polarized in every possible way, I want to make
it clear that to us, they appear perfectly opposite in their nature,
so I say perpendicular. Topology because it's a continuum with an XY
axis (Y being quantitative magnitude of literal scale on the
occidental side; size/scale, density, distance, and qualitative
magnitude on the oriental side; greatness/significance, intensity,
self-referentiality...these aren't an exhaustive list, I'm just
throwing out adjectives.). I'm not averse to studying the concepts of
mathematics, I'm just limited in how I can make sense of them and how
much I want to use them. I'm after more of an F=ma nugget of
simplicity than a fully explicated field equation. I want the most
elementary possible conception of what the cosmos seems to be.

>What do you mean by interior of electromagnetism.

The subjective correlate of all phenomena which we consider
electromagnetic. It could be more of an ontological interiority -
throughput. I'm saying that energy is a flow of experiences contained
by the void of energy - and all energy is change or difference in
what is sensed or intended. Negentropy. If there is no change in what
something experiences, there is no time. So it makes sense that what
we observe in the brain as being alterable with electromagnetism
translates as changes in sensorimotor experience.

>> Quantum Mechanics is a misinterpretation of atomic quorum sensing.
>This seems like non sense.

Didn't mean to be inflammatory there. What I mean to say is that the
popular layman's understanding of QM as how the microcosm works - the
Standard Model of literal particles in a vacuum with strange
behaviors - is inside out. What we are actually detecting is
particulate moods of sensorimotive events shared by our measuring
equipment (including ourselves) and the thing that we think is being
measured.

>>> Time, space, and gravity are void. Their effects are explained by
>> perceptual relativity and sensorimotor electromagnetism.

>?

Time is just the dialectic of change and the cumulative density of
its own change residue carried forward. Space is just the
singularity's way of dividing itself existentially. If you have a
universe of one object, there is no space. Space is only the relation
of objects to each other. No relation, no space. Perceptual relativity
is meta-coherence, how multiple levels and scales of sensorimotor
electromagnetic patterns are recapitulated (again cumulative
entanglement...retention of pattern through iconicized
representation).

>> The "speed of light" c is not a speed it's a condition of
>> nonlocality or absolute velocity, representing a third state of
>> physical relation as the opposite of both stillness and motion.

>?
Stillness is a state which appears unchanging from the outside, and
from the inside the universe is changing infinitely fast. Motion is
the state of change relative to other phenomena, the faster you move
the more time slows down for you relative to other index phenomena. c
is the state of absolute change - being change+non change itself so
that it appears non-local from the outside, ubiquitous and absent, and
from the inside the cosmos is still.

Any better?

Craig

Bruno Marchal

unread,
Jul 17, 2011, 12:57:17 PM7/17/11
to everyth...@googlegroups.com

No, it is worse, I'm afraid. I hope you don't mind when I am being
frank. In fundamental matters, you have to explain things from
scratch. Nothing can be taken for granted, and you have to put your
assumptions on the table, so that we avoid oblique comments and
vocabulary dispersion.
You say yourself that you don't know if you talk literally or
figuratively. That says it all, I think. You should make a choice,
and work from there. Personally, I am a literalist, that is, I am
applying the scientific method. For the mind-body problem, the hard
part for scientists actually consists in understanding that once we
assume the comp hyp, we can translate "philosophical problems" into
"mathematical and/or physical problems".
Philosophers don't like that (especially continental ones), but this
fits with their usual tradition of defending academic territories and
positions (food). It is natural: as in (pseudo-)religion, they are
not very happy when people use the scientific method to invade their
fields of study.
But this means that, in interdisciplinary research, you must be able
to be understood by a majority in each field you are crossing. Even
when you are successful at this, you will have to find the people
having the courage to study the connection between the domains.
A lot of scientists still believe that notions like mind and
consciousness are crackpot notions, and when sincere people try to
discuss those notions, you can be amazed by the tons of difficulties.
I have nothing against some attempts toward a materialist solution of
the mind-body problem, and in that case at least we know (or should
know, or refute...) that we have to abandon even extremely weak
versions of mechanism. But then, this looks like introducing special
(and unknown) infinities into the mind-body puzzle, so I am not
interested without some key motivation being provided.

In this list people are open-minded toward the "everything exists"
type of theories, like Everett's Many-Worlds, with an open mind on
computationalism (Schmidhuber) and mathematicalism or immaterialism
(Tegmark). So my own contribution was well suited, given that I
propose an argument showing that if we believe that we can survive
with a digitalizable body, then we dispose, ONLY, of a very solid,
constructive, and highly complex structured version of an
"everything": all computations (in the precise arithmetical sense of
sigma_1 arithmetical relations) and their (coded) proofs. I also show
that we dispose of a very natural notion of observer, the universal
machine, and that among them we can already "interview" those which
can prove, know, guess, and feel about their internal views on
realities.

Everett's move to embed the physicist subject *in* the object matter
of the physical equation (SWE) extends itself into the arithmetical
realm, with the embedding of the mathematician *in* arithmetic, once
we take the possibility of our local digitalization seriously enough
into consideration.

This shows mainly that, with comp, the mind-body problem is two
times more complex than what people usually think. Not only do we
have to explain qualia/consciousness from the numbers, but we have to
explain quanta/matter from the numbers too.

But universal machines have a natural theory of thought (the laws of
Boole) and a natural theory of mind (the Gödel-Löb-Solovay logics of
self-reference), and by the very existence of computer science, in
fine, you get a translation of the body problem into computer
science, which makes it automatically a problem in number theory.

Bruno


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

unread,
Jul 17, 2011, 6:54:40 PM7/17/11
to Everything List
>No, it is worse, I'm afraid. I hope you don't mind when I am being
>frank. In fundamental matters, you have to explain things from scratch.
>Nothing can be taken for granted, and you have to put your assumptions
>on the table, so that we avoid oblique comments and vocabulary
>dispersion.

No, I don't mind frankness at all. I'm trying not to assume anything
if I can help it. I'm just correlating all common phenomena in the
cosmos in a simple form which focuses on their symmetry, and which I
think accurately explains the relation of consciousness (or
meta-perception, which is meta-sensorimotive experience) to
electromagnetic patterns in the brain and, by extension, explains
Relativity as the perceptibility of matter in general.

>You say yourself that you don't know if you talk literally or
>figuratively. That says it all, I think. You should make a choice,
>and work from there.

It's not my intention to make a good theory, it's my intention to
describe the cosmos as it actually is. The cosmos is both literal and
figurative, and I believe its qualities of literalness and
figurativeness are part of the same continuum of objectivity-
subjectivity, discrete-compact, nihilistic existence-solipsistic
essence, etc. I don't know if it's useful to postulate a literal
topology when half of the continuum is figurative and experiential. It
seems like it would lead to a misunderstanding, but at the same time,
I believe that it is perpendicular ontologically, just not in the
sense that the two topologies could be modeled in space as
perpendicular regions. One of the topologies is perpendicular to the
idea of space itself.

>This shows mainly that, with comp, the mind-body problem is two times
>more complex than what people usually think. Not only do we have to
>explain qualia/consciousness from the numbers, but we have to explain
>quanta/matter from the numbers too.

I think the mind-body problem is resolved in my topology. It's
simple. Qualia and quanta are both elemental intersecting topologies
which meet on one end as maximally dimorphic (i.e. our ordinary,
mundane perception of subjective self vs. external objects) and on
the other end as profoundly indistinguishable (quantum mechanics,
shamanism produce logical dualisms, monastic detachment). Qualia
scales up as perception, quanta scales up as relativity. They are the
same meta-organizing principle: sensorimotive electromagnetism
squaring itself.
Craig

Stathis Papaioannou

unread,
Jul 19, 2011, 7:26:41 AM7/19/11
to everyth...@googlegroups.com
On Thu, Jul 14, 2011 at 10:45 PM, Craig Weinberg <whats...@gmail.com> wrote:

>  It's not about whether other cells would sense the imposter neuron,
> it's about how much of an imposter the neuron is. If it acts like a
> real cell in every physical way, if another organism can kill it and
> eat it and metabolize it completely, then you pretty much have a
> cell. Whatever cannot be metabolized in that way is what potentially
> detracts from the ability to sustain consciousness. It's not your
> cells that need to sense DNA, it's the question of whether a brain
> composed entirely of, or significantly of, cells lacking DNA would
> be conscious in the same way as a person.

DNA doesn't play a direct role in neuronal to neuronal interaction. It
is necessary for the synthesis of proteins, so without it the neuron
would be unable to, for example, produce more surface receptors or the
essential proteins needed for cell survival; however, if the DNA were
destroyed the neuron would carry on functioning as per usual for at
least a few minutes. Now, you speculate that consciousness may somehow
reside in the components of the neuron and not just in its function,
so that perhaps if the DNA were destroyed the consciousness would be
affected - let's say for the sake of simplicity that it too would be
destroyed - even in the period the neuron was functioning normally. If
that is so, then if all the neurons in your visual cortex were
stripped of their DNA you would be blind: your visual qualia would
disappear. But if all the neurons in your visual cortex continued to
function normally, they would send the normal signals to the rest of
your brain and the rest of your brain would behave as if you could
see: that is, you would accurately describe objects put in front of
your eyes and honestly believe that you had normal vision. So how
would this state, behaving as if you had normal vision and believing
you had normal vision, differ from actually having normal vision; or
to put it differently, how do you know that you aren't blind and
merely deluded about being able to see?


--
Stathis Papaioannou

Craig Weinberg

unread,
Jul 19, 2011, 6:13:35 PM7/19/11
to Everything List
I think there could be differences in how vision is perceived if all
of the visual cortex lacked DNA, even if the neurons of the cortex
exhibited superficial evidence of normal connectivity. A person could
be dissociated from the images they see, feeling them to be
meaningless or unreal, seen as if in third person or from malicious
phantom/alien eyeballs. Maybe it would be more subtle...a sensation of
otherhanded sight, or sight seeming to originate from a place behind
the ears rather than above the nose. The non-DNA vision could be
completely inaccessible to the conscious mind, a psychosomatic/
hysterical blindness, or perhaps the qualia would be different,
unburdened by DNA, colors could seem lighter, more saturated like a
dream. The possibilities are endless. The only way to find out is to
do experiments.

DNA may not play a direct role in neuronal to neuronal interaction,
but the same could be said of perception itself. We have nothing to
show that perception is the necessary result of neuronal interaction.
The same interactions could exist in a simulation without any kind of
perceived universe being created somewhere. Just because the behavior
of neurons correlates with perception doesn't mean that their behavior
alone causes perception. Materials matter. A TV set made out of
hamburger won't work.

What I'm trying to say is that the sensorimotive experience of matter
is not limited to the physical interior of each component of a cell or
molecule, but rather it is a completely other, synergistic topology
which is as diffuse and experiential as the component side is discrete
and observable. There is a functional correlation, but that's just
where the two topologies intersect. Many minor physical changes to the
brain can occur without any noticeable differences in perception -
sometimes major changes, injuries, etc. Major changes in the psyche
can occur without any physical precipitate - reading a book may
unleash a flood of neurotransmitters but the cause is semantic, not
biochemical.

What we don't know is what levels of our human experience are
essential and which ones may be vestigial or redundant. We don't know
what the qualitative content of the individual neuron signals are,
whether they contribute to a high level feeling upstream or whether
that contribution requires a low level experience to be amplified. If
a cell has no DNA, maybe it feels distress and that feeling is
amplified in the aggregate signals.


Jason Resch

unread,
Jul 19, 2011, 8:59:22 PM7/19/11
to everyth...@googlegroups.com
On Tue, Jul 19, 2011 at 5:13 PM, Craig Weinberg <whats...@gmail.com> wrote:
I think there could be differences in how vision is perceived if all
of the visual cortex lacked DNA, even if the neurons of the cortex
exhibited superficial evidence of normal connectivity. A person could
be dissociated from the images they see, feeling them to be
meaningless or unreal, seen as if in third person or from malicious
phantom/alien eyeballs. Maybe it would be more subtle...a sensation of
otherhanded sight, or sight seeming to originate from a place behind
the ears rather than above the nose. The non-DNA vision could be
completely inaccessible to the conscious mind, a psychosomatic/
hysterical blindness, or perhaps the qualia would be different,
unburdened by DNA, colors could seem lighter, more saturated like a
dream. The possibilities are endless. The only way to find out is to
do experiments.

So would the person dissociated from these images, or feeling them meaningless or unreal, etc., ever report these different feelings?  Remember, nerves control movement of the vocal cords; if the neural network was unaffected and its operation remained the same, all outwardly visible behavior would also be the same.  The person could not report any differences with their sense of vision, nor would other parts of their brain (such as those of thought, or introspection, etc.) have any indication that the nerves in the visual cortex had been modified (so long as they continued to send the right signals at the right times).

 

DNA may not play a direct role in neuronal to neuronal interaction,
but the same could be said of perception itself. We have nothing to
show that perception is the necessary result of neuronal interaction.

All inputs to the brain are the result of neuronal interaction, as are all outputs.  Neurons are affected by other neurons.

Now if I present an apple to a person, and I ask "What is this?", and the person reports "An apple," that is an example of perception.

In theory, one could trace the nerve signals from the optic and auditory nerves all the way to the nerves controlling the vocal cords.  For perception to not be the result of neuronal interaction, you would need to find some point between the auditory and visual inputs and the verbal outputs where something besides other nerves are controlling or affecting the behavior of nerves.

Do you have any proposal for what this thing might be?
 
The same interactions could exist in a simulation without any kind of
perceived universe being created somewhere. Just because the behavior
of neurons correlates with perception doesn't mean that their behavior
alone causes perception. Materials matter. A TV set made out of
hamburger won't work.

Humans can make TV sets using cathode ray tubes, liquid crystal displays, projection screens, plasma display panels, and so on.  Obviously material does not matter for making a TV set; what is important is the functions and behaviors of the components.  So long as the components allow emission of light at certain frequencies at specific locations on a grid, they can be used to construct a television set.
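
A toy Python sketch of that component-substitution point (the classes are hypothetical): the driving code depends only on the set_pixel behavior, never on the material behind it.

class CRTDisplay:
    def set_pixel(self, x, y, rgb):
        pass  # would steer an electron beam to light phosphor at (x, y)

class LCDDisplay:
    def set_pixel(self, x, y, rgb):
        pass  # would switch the liquid-crystal cell at (x, y)

def show_test_pattern(screen):
    # Any object providing set_pixel can serve as the 'TV set'.
    for x in range(4):
        screen.set_pixel(x, 0, (255, 255, 255))

show_test_pattern(CRTDisplay())
show_test_pattern(LCDDisplay())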
 

What I'm trying to say is that the sensorimotive experience of matter
is not limited to the physical interior of each component of a cell or
molecule, but rather it is a completely other, synergistic topology
which is as diffuse and experiential as the component side is discrete
and observable. There is a functional correlation, but that's just
where the two topologies intersect. Many minor physical changes to the
brain can occur without any noticeable differences in perception -
sometimes major changes, injuries, etc. Major changes in the psyche
can occur without any physical precipitate - reading a book may
unleash a flood of neurotransmitters but the cause is semantic, not
biochemical.

The idea that two functionally equivalent minds made out of different material could determine a difference is contrary to the near universally accepted Church-Turing thesis.  A result of the thesis is that it is not possible for a process to determine its ultimate implementation.  This is the technology that allows one to play old Atari or Nintendo games on modern PCs, despite the completely different hardware and architecture.  From the perspective of the old Nintendo game, it is running on a Nintendo console; it has no way to determine it is running on a Dell laptop running Windows.  Similarly, if the mind is a process, it has, in principle, no way of knowing whether it is implemented by a wet brain or a cluster of supercomputers.
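
A minimal Python sketch of that implementation-blindness (the one-line "emulator" is a stand-in, not a real console emulator): the program's result is identical whether it runs directly or under an extra interpreter layer, so nothing inside it can reveal which way it ran.

program = "result = sum(range(10))"   # stands in for the 'game'

direct_env = {}
exec(program, direct_env)             # run directly on the host

def emulator(source):
    env = {}
    exec(source, env)                 # same semantics, extra layer beneath
    return env

emulated_env = emulator(program)

# Identical outputs: the program cannot detect the layer it runs on.
assert direct_env["result"] == emulated_env["result"] == 45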


Jason

Craig Weinberg

unread,
Jul 20, 2011, 8:08:07 AM7/20/11
to Everything List
>So would the person dissociated from these images, or feeling them
>meaningless or unreal, etc., ever report these different feelings?
>Remember, nerves control movement of the vocal cords; if the neural network
>was unaffected and its operation remained the same, all outwardly visible
>behavior would also be the same. The person could not report any
>differences with their sense of vision, nor would other parts of their brain
>(such as those of thought, or introspection, etc.) have any indication that
>the nerves in the visual cortex had been modified (so long as they continued
>to send the right signals at the right times).

I'm saying that without DNA in the neurons, or something which
functions exactly as DNA, it may not be possible to satisfy the given
that the neural network is unaffected. It's all a matter of what the
substitution level is. If you replaced water with heavy water, it's
not exactly the same thing. If you have something that acts like water
in all ways, it's nothing but water. If you have a brain made of
neurons that are not neurons, you have something other than a brain to
one degree or another, depending on the exact difference. If you are
stating as a given that there is no difference between the replacement
brain and a biological brain, then the replacement brain is nothing
but a biological brain.

>All inputs to the brain are the result of neuronal interaction, as are all
>outputs. Neurons are affected by other neurons.
>

I think that 'the brain' is neuronal interaction (and intracellular
interaction, molecular interaction). Its inputs and outputs are with
the outside world of physical sense and the inside world of semantic
sense. The brain is the abacus, storing, changing, and organizing
patterns, but the experience is felt through the brain, not as a
consequence of the brain's functionality. The functionality of course
determines access to what patterns can be accessed from the exterior
by the interior and vice versa, but it is the interior sense of the
brain as a whole which is the user(s) of the computer.

>Now if I present an apple to a person, and I ask "What is this?" and the
>person reports "An apple." that is an example of perception.
>
>In theory, one could trace the nerve signals from the optic and auditory
>nerves all the way to the nerves controlling the vocal cords. For
>perception to not be the result of neuronal interaction, you would need to
>find some point between the auditory and visual inputs and the verbal
>outputs where something besides other nerves are controlling or affecting
>the behavior of nerves.

The perception is the result of the apple first - of the properties
of the universe which allow sense to be propagated from apple to
optic nerve to visual cortex. From the outside looking in, perception
is incredibly complex. From the inside looking out, it's very simple.
Pain is simple. We are complex, so our pain is mechanically achieved
in a relatively complex way, but any living organism probably has
some version of a pain-like experience. It's as elemental as ATP or
DNA. We can't observe it from the outside, of course, because the
interior universe is innumerable private reality tunnels - the polar
opposite of the public unified topology of the exterior.

>Humans can make TV sets using cathode ray tubes, liquid crystal displays,
>projection screens, plasma display panels, and so on. Obviously material
>does not matter for making a TV set, what is important is the functions and
>behaviors of the components. So long as the components allow emission of
>light at certain frequencies at specific locations on a grid it could be
>used to construct a television set.

Of course material matters. There is a narrow range of materials
which we can feasibly make a TV set out of. We can't make a TV set
out of hamburger because hamburger cannot be made into components
that do the same thing as semiconductors. You're also conflating a TV
set with any two-dimensional display, which is not what we're talking
about. We very well could genetically engineer a brain, or
biologically engineer a brain, but I'm saying that we cannot
semiotically engineer a brain out of inorganic matter and expect it
to be able to feel what organisms feel. It's just going to be a
sculpture of a brain that behaves like a brain from the outside, but
it can only play DVDs for us. It has no user.

Craig


Stathis Papaioannou

unread,
Jul 20, 2011, 9:09:41 AM7/20/11
to everyth...@googlegroups.com
On Wed, Jul 20, 2011 at 10:08 PM, Craig Weinberg <whats...@gmail.com> wrote:
>>So would the person dissociated from these images, or feeling them
>>meaningless or unreal, etc., ever report these different feelings?
>>Remember, nerves control movement of the vocal cords; if the neural network
>>was unaffected and its operation remained the same, all outwardly visible
>>behavior would also be the same.  The person could not report any
>>differences with their sense of vision, nor would other parts of their brain
>>(such as those of thought, or introspection, etc.) have any indication that
>>the nerves in the visual cortex had been modified (so long as they continued
>>to send the right signals at the right times).
>
> I'm saying that without DNA in the neurons, or something which
> functions exactly as DNA, it may not be possible to satisfy the given
> that the neural network is unaffected. It's all a matter of what the
> substitution level is. If you replaced water with heavy water, it's
> not exactly the same thing. If you have something that acts like water
> in all ways, it's nothing but water. If you have a brain made of
> neurons that are not neurons, you have something other than a brain to
> one degree or another, depending on the exact difference. If you are
> stating as a given that there is no difference between the replacement
> brain and a biological brain, then the replacement brain is nothing
> but a biological brain.

The requirement is that the artificial neurons interact with the
biological neurons in the normal way, so that the biological neurons
can't tell that they are imposters. This is a less stringent
requirement than making artificial neurons that are indistinguishable
from biological neurons under any test whatsoever. In the example I
gave before, a neuron with its DNA removed would continue behaving
normally for at least a few minutes, so the surrounding neurons would
not detect that anything had changed, whereas an electron micrograph
might easily show the difference.
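
A toy Python sketch of that requirement (the classes and threshold are invented): neighbors only ever see the spikes that cross each cell's boundary, so a replacement that reproduces those spikes leaves the network's activity unchanged.

class BiologicalNeuron:
    def fire(self, inputs):
        # stand-in for the full biochemistry: spike above threshold
        return 1 if sum(inputs) > 1.0 else 0

class ArtificialNeuron:
    def fire(self, inputs):
        # entirely different internals could sit here; only the
        # returned spike is visible to neighboring cells
        return int(sum(inputs) > 1.0)

def network_activity(neurons, inputs):
    return [n.fire(inputs) for n in neurons]

original = [BiologicalNeuron() for _ in range(3)]
mixed = [BiologicalNeuron(), ArtificialNeuron(), BiologicalNeuron()]
assert network_activity(original, [0.6, 0.7]) == \
       network_activity(mixed, [0.6, 0.7])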


-- Stathis Papaioannou

Craig Weinberg

unread,
Jul 20, 2011, 2:40:50 PM7/20/11
to Everything List
Chickens can walk around for a while without a head also. It doesn't
mean that air is a viable substitute for a head, and it doesn't mean
that the head isn't producing a different quality of awareness than it
does under typical non-mortally wounded conditions.


meekerdb

unread,
Jul 20, 2011, 3:07:41 PM7/20/11
to everyth...@googlegroups.com
On 7/20/2011 11:40 AM, Craig Weinberg wrote:
> Chickens can walk around for a while without a head also. It doesn't
> mean that air is a viable substitute for a head, and it doesn't mean
> that the head isn't producing a different quality of awareness than it
> does under typical non-mortally wounded conditions.
>
>

No, but it means the chicken head isn't necessary to walking - just like
DNA isn't necessary to consciousness.

Brent

Craig Weinberg

unread,
Jul 20, 2011, 5:59:49 PM7/20/11
to Everything List
What does consciousness require?

meekerdb

unread,
Jul 20, 2011, 6:14:53 PM7/20/11
to everyth...@googlegroups.com
On 7/20/2011 2:59 PM, Craig Weinberg wrote:
> What does consciousness require?
>

Interaction with the world. Information processing. Memory. A point
of view; i.e. model of the world including self. Purpose/values.

Brent

Stathis Papaioannou

unread,
Jul 20, 2011, 6:58:49 PM7/20/11
to everyth...@googlegroups.com
On Thu, Jul 21, 2011 at 4:40 AM, Craig Weinberg <whats...@gmail.com> wrote:
> Chickens can walk around for a while without a head also. It doesn't
> mean that air is a viable substitute for a head, and it doesn't mean
> that the head isn't producing a different quality of awareness than it
> does under typical non-mortally wounded conditions.

I think you have failed to address the point made by several people so
far, which is that if the replacement neurons can interact with the
remaining biological neurons in a normal way, then it is not possible
for there to be a change in consciousness. The important thing is
**behaviour of the replacement neurons from the point of view of the
biological neurons**.


--
Stathis Papaioannou

Craig Weinberg

unread,
Jul 20, 2011, 7:33:48 PM7/20/11
to Everything List
Sounds like a fancy cash register to me.

Craig Weinberg

unread,
Jul 20, 2011, 7:44:15 PM7/20/11
to Everything List
Since it's not possible to know what the point of view of biological
neurons would be, we can't rule out the contents of the cell. You
can't presume to know that behavior is independent of context. If you
consider the opposite scenario, at what point do you consider a
microelectronic configuration conscious? How many biological neurons
must be added to a computer before it has its own agenda?

Craig Weinberg

unread,
Jul 20, 2011, 7:51:17 PM7/20/11
to Everything List
Or, imagine you were to replace a city with empty cars that drive the
streets following sophisticated models of urban traffic. Is a group of
empty buildings that produce empty cars which drive around the streets
convincingly a city?

meekerdb

unread,
Jul 20, 2011, 9:02:47 PM7/20/11
to everyth...@googlegroups.com
On 7/20/2011 4:33 PM, Craig Weinberg wrote:
> Sounds like a fancy cash register to me.
>

Better than magic topology.

Brent

meekerdb

unread,
Jul 20, 2011, 9:06:27 PM7/20/11
to everyth...@googlegroups.com
On 7/20/2011 4:44 PM, Craig Weinberg wrote:
> Since it's not possible to know what the point of view of biological
> neurons would be, we can't rule out the contents of the cell.

A neuron doesn't see anything. It doesn't have a "point of view".

> You
> can't presume to know that behavior is independent of context.

If behavior is independent of context it isn't even intelligent, much
less conscious.


> If you
> consider the opposite scenario, at what point do you consider a
> microelectronic configuration conscious? How many biological neurons
> must be added to a computer before it has its own agenda?
>

That's like asking how many p-n junctions have to be added to make a
computer. It's a matter of organization, not just numbers.
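
A toy Python illustration of the organization point (the gate count and
wiring are arbitrary): the same four NAND evaluations compute XOR when
wired one way and a useless constant when wired another.

def nand(a, b):
    return 1 - (a & b)

def xor_from_nands(a, b):
    # four NANDs organized as XOR
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def constant_from_nands(a, b):
    # the same four NANDs chained so the output ignores the inputs
    t = nand(a, a)      # NOT a
    u = nand(t, t)      # a again
    v = nand(u, t)      # a NAND (NOT a) == 1, always
    return nand(v, v)   # NOT 1 == 0, a constant

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == (a ^ b)
        assert constant_from_nands(a, b) == 0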

Brent

Stathis Papaioannou

unread,
Jul 20, 2011, 10:12:15 PM7/20/11
to everyth...@googlegroups.com
On Thu, Jul 21, 2011 at 9:44 AM, Craig Weinberg <whats...@gmail.com> wrote:
> Since it's not possible to know what the point of view of biological
> neurons would be, we can't rule out the contents of the cell. You
> can't presume to know that behavior is independent of context. If you
> consider the opposite scenario, at what point do you consider a
> microelectronic configuration conscious? How many biological neurons
> must be added to a computer before it has its own agenda?

I think you're still missing the point. Forget about consciousness for
the moment and consider only the mechanical aspect of the brain. By
analogy consider a car: we replace parts that wear out with new parts
that function equivalently. If we replace the spark plugs, then as long
as the new ones screw in properly and have the right electrical
properties it doesn't matter if they are a different shape or colour.
The proof of this is that the car is observed to function normally under
all circumstances. Similarly with the brain, we replace some existing
neurons with modified or artificial neurons that function identically.
No doubt it would be difficult to make such neurons, but *provided*
they can be made and appropriately installed, the behaviour of the
entire brain will be the same, and *therefore* the consciousness will
be the same. Do you agree with this, or not?
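
To make the "therefore" step concrete, a toy sketch (the three-unit
network, weights and threshold rule are made-up assumptions, not a brain
model): swap one unit for a differently coded but input/output-equivalent
one, and the network's trajectory is unchanged.

def step(state, weights, units):
    # each unit computes its next value from the weighted current state
    return [units[i](sum(w * s for w, s in zip(weights[i], state)))
            for i in range(len(units))]

original = lambda x: 1.0 if x > 0.0 else 0.0
replacement = lambda x: 0.0 if x <= 0.0 else 1.0  # different code, same mapping

weights = [[0.0, 1.0, -1.0], [1.0, 0.0, 1.0], [-1.0, 1.0, 0.0]]
a = b = [1.0, 0.0, 1.0]

for _ in range(10):
    a = step(a, weights, [original, original, original])
    b = step(b, weights, [original, replacement, original])  # one unit swapped
assert a == b  # identical trajectories: the rest of the network "can't tell"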


--
Stathis Papaioannou

Bruno Marchal

unread,
Jul 21, 2011, 5:48:38 AM7/21/11
to everyth...@googlegroups.com

And interfacing biological neurons with non-biological circuits is not
sci-fi nowadays.

http://www.youtube.com/watch?v=1-0eZytv6Qk&feature=related

http://www.youtube.com/watch?v=1QPiF4-iu6g&feature=fvwrel

http://www.youtube.com/watch?v=-EvOlJp5KIY

This is NOT a proof, nor even strong evidence for computationalism,
but it is strong evidence that humans will believe in comp, and
practice it, no matter what.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jul 21, 2011, 5:55:05 AM7/21/11
to everyth...@googlegroups.com

On 21 Jul 2011, at 00:14, meekerdb wrote:

> On 7/20/2011 2:59 PM, Craig Weinberg wrote:
>> What does consciousness require?
>>
>
> Interaction with the world.

But what is a world? Also, assuming computationalism, you need only to
believe that you interact with a "world/reality", whatever that is,
like in a dream. If not you *do* introduce some magic in both
consciousness and world.

> Information processing. Memory. A point of view; i.e. model of the
> world including self. Purpose/values.

OK. Although "conscious purpose" is already a high form of
consciousness, it might be self-consciousness.

Bruno

http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

unread,
Jul 21, 2011, 6:25:40 AM7/21/11
to Everything List
>> Sounds like a fancy cash register to me.

>Better than magic topology.

A fictive topology explains the desire for magic, but a cash register
has no desire. How complicated does the cash register have to be
before it invents the idea of magic? If the cash register reproduces
itself, would baby registers be more imaginative than adults?

>A neuron doesn't see anything. They don't have a "point of view".

How many neurons do there have to be before they collectively develop
one? Where does it come from?

>If behavoir is independent of context it isn't even intelligent, much
>less conscious.

So then why are you so sure that it cannot depend on a specific
molecular context?

>That's like asking how many NP junctions have to added to make a
>computer. It's a matter of organization, not just numbers.

If a towel were sufficiently long, and existed for a long enough time,
would it eventually become conscious if enough knots were tied in it in
just the right organization? How about a game? If we made a game with
enough rules and dice throws, would the game itself eventually become
a conscious entity?

Craig

Craig Weinberg

unread,
Jul 21, 2011, 6:46:21 AM7/21/11
to Everything List
It depends entirely on the degree to which the neurons are modified or
artificial. If you replace some parts of a car with ones made out of
chewing gum or ice, they may work for a while under particular
conditions, temperatures, etc. Think of how simple an artificial heart
is by comparison to even a single neuron, let alone a brain. It's a
pump with a regular beat. Yet, the longest anyone has survived with
one is seven years.

All I'm saying is that for something to function identically to a
neuron, it must in all likelihood be a living organism, and to be a
living organism, it's likely that it needs to be composed of complex
organic molecules. Not due to the specific magic of organic
configurations but due to the extraordinary level of fidelity required
to reproduce the tangible feelings produced by living organisms, and
the critical role those feelings likely play in the aggregation of
what we consider to be consciousness. Consciousness is made of
feelings themselves, and their behaviors, their internal consistency
and not just the neurological behaviors which are associated with
them. It is a first person experience, completely undetectable in
third person.

Or, to use your car analogy, would the replaceable parts of a car
include a driver? Are all drivers capable of driving the car in the
same way? A blind person can physically drive the car, push the
pedals, turn the wheel. Can a blind or unconscious nucleus drive a
neuron?

Craig

Craig Weinberg

unread,
Jul 21, 2011, 6:50:45 AM7/21/11
to Everything List
I don't have a problem with living neurological systems extending
their functionality with mechanical prosthetics, it's the other way
around that is more of an issue. People driving cars doesn't mean cars
driving human minds.

Craig Weinberg

unread,
Jul 21, 2011, 7:02:27 AM7/21/11
to Everything List
Consciousness is nothing more than the elaborated experience of
feeling. The world it interacts with does not have to make any
objective sense, requires no information processing or memory, purpose
or value. Pain is consciousness. It need not contain any information
beyond a projection of the possibility of its cessation. It is a self-
explanatory, innate, first person experience that doesn't need any
complex logic behind it, nor will any amount of logic necessarily
alleviate it directly. You can't always reason with pain. Pain cannot
be simulated quantitatively in any way. There is no equation, game, or
purely inorganic quantitative system that has ever felt pain or will
ever feel what we know as pain. Without first hand experience of the
difference between pain and pleasure, there can be no animal level of
consciousness.

Craig

Bruno Marchal

unread,
Jul 21, 2011, 9:31:09 AM7/21/11
to everyth...@googlegroups.com

On 21 Jul 2011, at 12:50, Craig Weinberg wrote:

> I don't have a problem with living neurological systems extending
> their functionality with mechanical prosthetics, it's the other way
> around that is more of an issue. People driving cars doesn't mean cars
> driving human minds.

Sure, but we do both: robots with neurons, and animals, including
humans, with the brain partially replaced by artificial neurons.
Anyway, if you think molecules are needed, that is, that the level of
substitution includes molecular activity, this too can be emulated by
a computer. The only way to negate computationalism consists in
pretending there is some NON Turing-emulable activity going on in the
brain, and relevant for consciousness. In that case, there is no
possible level of digital substitution.

Note that all physical phenomena known today are Turing emulable,
even, in some sense, quantum indeterminacy (in the QM without
collapse) where the indeterminacy is a first person view of a
digitalisable self-multiplication experiment.
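
A toy Python rendering of that last point (the 'W'/'M' labels are
arbitrary stand-ins for, say, Washington and Moscow): the program is
fully deterministic in the third person, yet each copy ends up with one
unpredictable first-person history among many.

from itertools import product

STEPS = 4
# every first-person history after STEPS duplications
histories = [''.join(p) for p in product('WM', repeat=STEPS)]

# one deterministic tree, 2**STEPS equally real first-person diaries;
# no copy could have predicted its own diary in advance
print(len(histories), "histories, e.g.", histories[:4])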

All that consciousness (and matter) needs is a sufficiently rich
collection of self-referential relations. It happens that the numbers,
by the simple laws of addition and multiplication provides already
just that. Adding some ontological elements can only make the mind
body problem more complex to even just formulate.

Bruno

Bruno Marchal

unread,
Jul 21, 2011, 10:03:16 AM7/21/11
to everyth...@googlegroups.com

On 21 Jul 2011, at 13:02, Craig Weinberg wrote:

>>
>>> On 7/20/2011 2:59 PM, Craig Weinberg wrote:
>>>> What does consciousness require?
>>
>>> Interaction with the world.
>>
>> But what is a world? Also, assuming computationalism, you need only
>> to
>> believe that you interact with a "world/reality", whatever that is,
>> like in a dream. If not you *do* introduce some magic in both
>> consciousness and world.
>>
>>> Information processing. Memory. A point of view; i.e. model of the
>>> world including self. Purpose/values.
>>
>> OK. Although "conscious purpose" is already a high form of
>> consciousness, it might be self-consciousness.
>
> Consciousness is nothing more than the elaborated experience of
> feeling.

OK.


> The world it interacts with does not have to make any
> objective sense, requires no information processing or memory, purpose
> or value.

OK.


> Pain is consciousness.

OK.


> It need not contain any information
> beyond a projection of the possibility of its cessation.

OK.

> It is a self-
> explanatory, innate, first person experience

OK.

> that doesn't need any
> complex logic behind it,

Why? This is just like saying "we can't explain it". I am OK with
that, but then I look for better definitions and assumptions, with the
goal of at least finding an explanation of why it seems like that, or
why there is no explanation. Without this, it is like invoking the
will of God, and adding "don't search for an explanation".

> nor will any amount of logic necessarily
> alleviate it directly.

I agree. Most people will say that logic will just add a layer of
headache :)
Still, we need logic, and *some* theory to explain why we cannot
explain directly the first person sensations.

> You can't always reason with pain.

Right. It is not a reason type of thing. But there might be a (meta)
reason for that.


> Pain cannot
> be simulated quantitatively in any way.

How do you know?

> There is no equation, game, or
> purely inorganic quantitative system that has ever felt pain or will
> ever feel what we know as pain.

You remind me of the Spanish Christians arguing that South American
Indians have no souls. You can rape and enslave them at will: it is
not a sin! (To be sure they *did* eventually conclude, at the
Valladolid meeting, that they have a soul, so that it was necessary to
convert them to save them from hell).
(That's why the "spirit" of the Salvia divinorum plant became known as
the Virgin Mary!)

> Without first hand experience of the
> difference between pain and pleasure, there can be no animal level of
> consciousness.

I am OK with this. But I do think plausible that you can emulate
digitally first hand experiences of pain and pleasure. Then 'real'
human-like pain, which can last for a time, will need the whole
(arithmetical) truth to be stable on its many 'futures'. Our first
person experiences are non computably distributed on an infinite
structure, but that is a consequence of their digitalness at some level.

Bruno


Craig Weinberg

unread,
Jul 21, 2011, 10:08:55 AM7/21/11
to Everything List
>if you think molecules are needed, that is, that the level of
>substitution includes molecular activity, this too can be emulated by
>a computer

But it can only be emulated in a virtual environment interfacing with
a computer literate human being though. A real mouse will not be able
to live on virtual cheese. Why can't consciousness be considered
exactly the same way, as an irreducible correlate of specific meta-
meta-meta-elaborations of matter?

>All that consciousness (and matter) needs is a sufficiently rich
>collection of self-referential relations. It happens that the numbers,
>by the simple laws of addition and multiplication provides already
>just that. Adding some ontological elements can only make the mind
>body problem more complex to even just formulate.

Information is not consciousness. Energy is the experience of being
informed and informing, but it is not information. This is why a brain
must be alive and conscious (not in a coma) to be informed or inform,
and why a computer must be turned on to execute programs, or a
mechanical computing system has to have kinetic initialization, etc.
The path that energy takes determines the content of the experience to
some extent, but it is the physical nature of the materials through
which the continuous sense of interaction occurs which determines the
quality or magnitude of possible qualitative elaboration (physical,
chemo, bio, zoo-physio, neuro, cerebral) of that experience. Physical
will take you to detection, chemo to sense, bio to feeling, zoo to
emotion, neuro to cognition, cerebral to full abstraction (colloquial
terms here, not asserting a formal taxonomy). All are forms of
awareness. Consciousness implies awareness of awareness which maybe
comes at the neuro or cerebral level, maybe lower? It has nothing to
do with the complexity of the path that the energy takes. Complexity
is an experience, not a discrete ontological condition.

>Adding some ontological elements can only make the mind
>body problem more complex to even just formulate.

This makes me think that you are sentimental about protecting the
simplicity of an abstract formula, rather than faithfully representing
the problem. I'm not especially interested in the 'easy' problem of
consciousness. It's a worthwhile problem, to be sure, it's just not my
thing. I do think, however, that if we can accurately describe the
pattern of what the hard problem seems to arise from, it may have
implications for both the easy and hard problems. At worst, my view
limits the aspirations of inorganic materials to simulate
consciousness, but I don't see that as anything more than an
identification of how the cosmos works. We don't want to create
consciousness, we can do that already by reproducing. We want an
omnipotent glove for the hand of consciousness that we already have.
That seems easier to accomplish if we are not convincing ourselves
that feelings must be numbers.

Craig

meekerdb

unread,
Jul 21, 2011, 11:59:28 AM7/21/11
to everyth...@googlegroups.com
On 7/21/2011 2:55 AM, Bruno Marchal wrote:
>
> On 21 Jul 2011, at 00:14, meekerdb wrote:
>
>> On 7/20/2011 2:59 PM, Craig Weinberg wrote:
>>> What does consciousness require?
>>>
>>
>> Interaction with the world.
>
> But what is a world? Also, assuming computationalism, you need only to
> believe that you interact with a "world/reality", whatever that is,
> like in a dream. If not you *do* introduce some magic in both
> consciousness and world.


So I need to believe some magic or I have to introduce some magic. That
seems a distinction without a difference.

>
>
>
>> Information processing. Memory. A point of view; i.e. model of the
>> world including self. Purpose/values.
>
> OK. Although "conscious purpose" is already a high form of
> consciousness, it might be self-consciousness.
>
> Bruno

I think there are different kinds and levels of consciousness,
awareness, cogitation. Purpose need not be something reflected on.
Even simple animals have purpose hardwired in their genes.

Brent

Craig Weinberg

unread,
Jul 21, 2011, 6:42:54 PM7/21/11
to Everything List
>> that doesn't need any
>> complex logic behind it,
>
>Why? This is just like saying "we can't explain it". I am OK with
>that, but then I look for better definitions and assumptions, with the
>goal of at least finding an explanation of why it seems like that, or
>why there is no explanation. Without this, it is like invoking the
>will of God, and adding "don't search for an explanation".

Right, I totally agree. I just think there's a significant chance that
we will be looking in the wrong place if we restrict ourselves to
digital-analytic logic. Not saying that we should abandon all hope of
producing insights from that area too, I'm just wanting to know if
anyone has any objections to me planting a flag on this new continent
to explore.

My thinking at this point is that the reason it seems so difficult to
explain is that:

1) since we, the subjective observer, are in my opinion a phenomenon of
a category which is identical to qualia, the sameness leads to an
ontological problem of not being able to examine qualia from outside
of the realm of qualia.

2) the nature of the qualitative phenomenon itself is the opposite
(interior, perhaps trans-terior) of quantitative phenomenology so we
may have to control our scientific impulses toward deterministic
theory to allow for more flexible and intuitive apprehensions to
embrace the nuances of how it works.

3) the qualitative principle is identical to privacy in an ontological
sense of being self-sequestering from public exterior access. The
privacy itself is what defines the locus of qualitative phenomena.

4) this 'stuff' may be ultimately originating through a non-local, a-
temporal axiom of the Singularity, so that we may not only have
restricted access by virtue of our own separation from each other, but
qualia itself may somehow present the experience of entities which we
would consider to be in the future as well as the past.

As far as 3 goes, we may actually be able to overcome our separateness
using technology. To be able to experiment with a neural prosthetic
which could extend our visual cortex to access multiple visual systems
- insect, bird, dog, etc. To be able to record and play back neural
activity records. These things are entirely possible and I think
would be hugely informative. Maybe we can break some qualia codes that
way, tweak our sensation retroperceptually, etc.

1 and 2 are just a matter of breaking habits of how we think about
these things. That's the fun part. Next time you turn on the light in
your room, look at what it is that you see, not as a flood of
invisible photon stuff, but as eavesdropping on your eyeball's
conversation with the illuminated surfaces of the room. The light
isn't being added to your face or the room, it's all lighting itself
up according to the perceptual-relativistic protocols of illumination.
The surface of everything is lit up from the inside, or the trans-
terior side in the presence of a sufficiently excited quantity of
matter. It works just like a painting or computer graphic, in the
sense that it is the surface itself which is changing and not some
intangible light juice spraying all over the place.

>> Pain cannot
>> be simulated quantitatively in any way.
>
>How do you know?

I don't, but I think if it could, then you would be asking my theory
directly how it knows instead of me.

>You remind me of the Spanish christians arguing that south american
>indians have no souls. You can rape and enslave them at will: it is
>not a sin! (To be sure they *did* eventually conclude, at the
>Valladolid meeting, that they have a soul, so that it was necessary to
>convert them to save them from hell).
>(That's why the "spirit" of the Salvia divinorum plant became known as
>the Virgin Mary!)

That's the tragic irony. Turned out that they themselves were the ones
who had no souls. Oops.
I'm only taking a hard line on this because I think that it's in such
contradistinction to the momentum of civilized thought. A sufficiently
evolved card game could be pretty damn impressive, and if we invest
our own feeling into it, there are arguably new feelings that we
experience as a result, I just don't think that what we see as the
game can have feelings that we can realize. We can't rule out though
that anything we experience as having no feeling has a private
dimension that may see us as having no feeling. It just gets a bit too
psychedelic (salviadelic?) to actually implement that level of animism
practically, don't you think?

>I am OK with this. But I do think plausible that you can emulate
>digitally first hand experiences of pain and pleasure. Then 'real'
>human-like pain, which can last for a time, will need the whole
>(arithmetical) truth to be stable on its many 'futures'.

I think you can emulate first-hand experiences only in a system capable
of subjectively experiencing them. We certainly should be able to emulate
the output of some kind of pain or pleasure and input it into another
nervous system. Simple record and playback through an analog or
digital medium. That's really one of my earliest and strongest dreams:
to be involved in the orchestration of full sensory
experiences, brain-direct.

>Our first
>person experiences are non computably distributed on an infinite
>structure, but that is a consequence of their digitalness at some level.

There was a lot made of the perceived difference in digital music when
CDs first came out, in the audiophile communities particularly. I do
think that a subtle difference can be detected but hard to know
whether it's the digital nature itself or the processing, mixing,
playback equipment, confirmation bias, etc. Digital music seems
harsher, more sibilant and shallow on the percussion. It doesn't
bother me much, but I think there could be a legitimate, if subtle
difference stemming from the pure conversion of analog waveforms to
digital samples.

Craig
http://s33light.org

Jason Resch

unread,
Jul 21, 2011, 7:16:57 PM7/21/11
to everyth...@googlegroups.com
On Thu, Jul 21, 2011 at 5:42 PM, Craig Weinberg <whats...@gmail.com> wrote:

There was a lot made of the perceived difference in digital music when
CDs first came out, in the audiophile communities particularly. I do
think that a subtle difference can be detected but hard to know
whether it's the digital nature itself or the processing, mixing,
playback equipment, confirmation bias, etc. Digital music seems
harsher, more sibilant and shallow on the percussion. It doesn't
bother me much, but I think there could be a legitimate, if subtle
difference stemming from the pure conversion of analog waveforms to
digital samples.


Whether or not a nerve cell in your cochlea fires is digital, as is the number of ions it releases when it fires. Thus, even when listening to analogue recordings, by the time it reaches your brain the signal has been digitized. Digital representations in today's technology may have compression artifacts or be sampled at rates coarse enough for the human ear to discern, but there is some level of digital fidelity at which it would be impossible for your ear to distinguish.
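
A small numpy sketch of that last claim (the tone, rate and window are
arbitrary choices, and this is about sampling in general, not the ear):
a band-limited signal sampled above twice its highest frequency can be
reconstructed from its samples essentially exactly.

import numpy as np

f, fs, dur = 1000.0, 44100.0, 0.01            # 1 kHz tone, CD rate, 10 ms
n = np.arange(int(dur * fs))
samples = np.sin(2 * np.pi * f * n / fs)      # the digitized version

# Whittaker-Shannon (sinc) reconstruction on a dense "analog" grid:
t = np.linspace(0, dur, 2000)
recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])
truth = np.sin(2 * np.pi * f * t)

# small away from the window edges, and shrinking as the window grows
interior = (t > 0.002) & (t < 0.008)
print("max interior error:", np.abs(recon - truth)[interior].max())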

Jason

Craig Weinberg

unread,
Jul 21, 2011, 8:35:50 PM7/21/11
to Everything List
>Whether or not a nerve cell in your cochlea fires is digital, as is
>the number of ions it releases when it fires. Thus, even when listening to
>analogue recordings, by the time it reaches your brain the signal has been
>digitized. Digital representations in today's technology may have compression
>artifacts or be sampled at rates coarse enough for the human ear to
>discern, but there is some level of digital fidelity at which it would be
>impossible for your ear to distinguish.

That description only takes into account the phenomena of sense from
the outside in, where each physical tissue responds in its own way to
the stimulation of the other tissues or fluids, and the sense of the
pattern is transduced from one physical form to another. From a truly
objective point of view, the idea of there being a 'signal' continuity
is a third person analytical conceit. In reality there are just
different materials responding to each other in a way which is
ultimately meaningful to us. There is no physical signal there, it's
just an event being shared sequentially amongst materials.

If you look at it from the inside out instead, the psyche is picking
up the analog modulation of the cilia, cochlea as a whole, and to some
extent the gestalt sense of the entire aural, physical event external
to the ear through the sensitivity of the auditory nerves. The entire
media path is collapsed, or as I say, cumulatively entangled, so that
the psyche is itself semantically altered to conform to the sense of
the sound event while preserving subtle traces of the entire
interstitial media path. This experiential description is every bit as
'real' as the outside in, and for most purposes much more relevant as
it is the signifying content of the sound that we care about, rather
than the a-signifying, generic form of its transfer.

I agree there would be a level at which digital recording is
indistinguishable from analog recording, but I think that it's due to
the intentional gating of the sense through the psyche and media path
rather than the limitations of nerve cells firing. The nerve cells
themselves may experience a huge range of sensitivity which we have no
conscious access to - the cochlea, maybe even more. Talking about raw
sensation here, not depth/richness of interpretative qualia.

Craig
http://s33light.org

Jason Resch

unread,
Jul 21, 2011, 9:28:01 PM7/21/11
to everyth...@googlegroups.com
On Thu, Jul 21, 2011 at 7:35 PM, Craig Weinberg <whats...@gmail.com> wrote:


I agree there would be a level at which digital recording is
indistinguishable from analog recording, but I think that it's due to
the intentional gating of the sense through the psyche and media path
rather than the limitations of nerve cells firing. The nerve cells
themselves may experience a huge range of sensitivity which we have no
conscious access to - the cochlea, maybe even more. Talking about raw
sensation here, not depth/richness of interpretative qualia.


Regardless of what the nerve cells experience individually, if it can't be communicated to other nerve cells, it can't be talked about, thought about, or wondered about.

Jason

meekerdb

unread,
Jul 21, 2011, 10:11:48 PM7/21/11
to everyth...@googlegroups.com
On 7/21/2011 5:35 PM, Craig Weinberg wrote:
>> Whether or not a nerve cell in your cochlea fires is digital, as is
>> >the number of ions it releases when it fires. Thus, even when listening to
>> >analogue recordings, by the time it reaches your brain the signal has been
>> >digitized. Digital representations in today's technology may have compression
>> >artifacts or be sampled at rates coarse enough for the human ear to
>> >discern, but there is some level of digital fidelity at which it would be
>> >impossible for your ear to distinguish.
>>
> That description only takes into account the phenomena of sense from
> the outside in, where each physical tissue responds in its own way to
> the stimulation of the other tissues or fluids, and the sense of the
> pattern is transduced from one physical form to another. From a truly
> objective point of view, the idea of there being a 'signal' continuity
> is a third person analytical conceit. In reality there are just
> different materials responding to each other in a way which is
> ultimately meaningful to us.

Isn't that the definition of a physical signal?

> There is no physical signal there, it's
> just an event being shared sequentially amongst materials.
>
> If you look at it from the inside out instead, the psyche is picking
> up the analog modulation of the cilia, cochlea as a whole, and to some
> extent the gestalt sense of the entire aural, physical event external
> to the ear through the sensitivity of the auditory nerves. The entire
> media path is collapsed, or as I say, cumulatively entangled, so that
> the psyche is itself semantically altered to conform to the sense of
> the sound event while preserving subtle traces of the entire
> interstitial media path. This experiential description is every bit as
> 'real' as the outside in,

No it's not. It implies, for example, that replacing a dysfunctional
cochlea with an electronic device that stimulates the auditory nerve would
not produce hearing - but it does.
some reason you left out the sound waves and their source) consists of
separable physical components.

> and for most purposes much more relevant as
> it is the signifying content of the sound that we care about, rather
> than the a-signifying, generic form of its transfer.
>
> I agree there would be a level at which digital recording is
> indistinguishable from analog recording, but I think that it's due to
> the intentional gating of the sense through the psyche and media path
> rather than the limitations of nerve cells firing. The nerve cells
> themselves may experience a huge range of sensitivity which we have no
> conscious access to - the cochlea, maybe even more. Talking about raw
> sensation here, not depth/richness of interpretative qualia.
>

What sense does it make to talk about sensations of our nerve cells
which we have no access to? Who does have access to them? If no one
does, then in what sense are they "sensations"? Of course you may
speculate that each nerve cell itself experiences some sensation, and
each molecule in the nerve cell, and each quark in each atom, and the
atoms of the atmosphere that carry the sound wave - but you could also
speculate that pigs will fly. The question is, "What's the evidence?"

Brent

Stathis Papaioannou

unread,
Jul 21, 2011, 10:58:57 PM7/21/11
to everyth...@googlegroups.com

No doubt it would be technically difficult to make an artificial
replacement for a neuron in a different substrate, but there is no
theoretical reason why it could not be done, since there is no
evidence for any magical processes inside neurons. The argument is
that IF an artificial neuron could be made which would replicate the
behaviour of a biological neuron well enough to slot into position in
a brain unnoticed THEN the consciousness of that brain would be
unaffected. If not, a bizarre situation would arise where
consciousness could change or disappear (e.g., going blind) without the
subject noticing. Can you address this particular point?


--
Stathis Papaioannou

Bruno Marchal

unread,
Jul 22, 2011, 4:58:37 AM7/22/11
to everyth...@googlegroups.com

On 21 Jul 2011, at 16:08, Craig Weinberg wrote:

>> if you think molecules are needed, that is, that the level of
>> substitution includes molecular activity, this too can be emulated by
>> a computer
>
> But it can only be emulated in a virtual environment interfacing with
> a computer literate human being though.

Why? That's begging the question.

> A real mouse will not be able
> to live on virtual cheese.

But a virtual mouse will (I will talk *in* the comp theory).

> Why can't consciousness be considered
> exactly the same way, as an irreducible correlate of specific meta-
> meta-meta-elaborations of matter?

What do you mean by matter? Primitive matter does not exist. A TOE has
to explain where the belief in matter comes from without assuming it.

>
>> All that consciousness (and matter) needs is a sufficiently rich
>> collection of self-referential relations. It happens that the
>> numbers,
>> by the simple laws of addition and multiplication provides already
>> just that. Adding some ontological elements can only make the mind
>> body problem more complex to even just formulate.
>
> Information is not consciousness. Energy is the experience of being
> informed and informing, but it is not information.

I agree.


> This is why a brain
> must be alive and conscious (not in a coma) to be informed or inform,
> and why a computer must be turned on to execute programs, or a
> mechanical computing system has to have kinetic initialization, etc.

Not at all. All you need are relative genuine relations. That does
explain both the origin of quanta and qualia, including the difference
between the quantitative and the qualitative.


> The path that energy takes determines the content of the experience to
> some extent, but it is the physical nature of the materials through
> which the continuous sense of interaction occurs which determine the
> quality or magnitude of possible qualitative elaboration (physical,
> chemo, bio, zoo-physio, neuro, cerebral) of that experience.


How?


> Physical
> will take you to detection, chemo to sense, bio to feeling, zoo to
> emotion, neuro to cognition, cerebral to full abstraction (colloquial
> terms here, not asserting a formal taxonomy).

You say so, but my point is that if you assume matter, your theory
needs a very special form of infinities. Which one?


> All are forms of
> awareness. Consciousness implies awareness of awareness

That is self-consciousness.

> which maybe
> comes at the neuro or cerebral level, maybe lower? It has nothing to
> do with the complexity of the path that the energy takes. Complexity
> is an experience, not a discrete ontological condition.

You need infinities to make complexity an experience, and that is like
putting the cart before the horse.

>
>> Adding some ontological elements can only make the mind
>> body problem more complex to even just formulate.
>
> This makes me think that you are sentimental about protecting the
> simplicity of an abstract formula, rather than faithfully representing
> the problem.

I was mentioning the mind-body problem. No formula was involved. You
put infinities and uncomputability everywhere, where comp puts them in
a very special place with complete justification.

> I'm not especially interested in the 'easy' problem of
> consciousness.

Me neither.

> It's a worthwhile problem, to be sure, it's just not my
> thing. I do think, however, that if we can accurately describe the
> pattern of what the hard problem seems to arise from, it may have
> implications for both the easy and hard problems. At worst, my view
> limits the aspirations of inorganic materials to simulate
> consciousness,

That is vitalism. It fails to explain anything. It makes the problem
less tractable. It is similar to the God of the gaps. Comp explains why
there is a gap. I am not sure you have studied the theory.

> but I don't see that as anything more than an
> identification of how the cosmos works. We don't want to create
> consciousness, we can do that already by reproducing. We want an
> omnipotent glove for the hand of consciousness that we already have.
> That seems easier to accomplish if we are not convincing ourselves
> that feelings must be numbers.

Comp explains completely why feelings are NOT numbers. You haven't studied
the theory, and you criticize only your own prejudices about numbers
and machines.

You can use non-comp, as you seem to desire, but then tell us what is
not Turing emulable in "organic matter"?

Bruno

Bruno Marchal

unread,
Jul 22, 2011, 5:11:59 AM7/22/11
to everyth...@googlegroups.com

On 21 Jul 2011, at 17:59, meekerdb wrote:

> On 7/21/2011 2:55 AM, Bruno Marchal wrote:
>>
>> On 21 Jul 2011, at 00:14, meekerdb wrote:
>>
>>> On 7/20/2011 2:59 PM, Craig Weinberg wrote:
>>>> What does consciousness require?
>>>>
>>>
>>> Interaction with the world.
>>
>> But what is a world? Also, assuming computationalism, you need only
>> to believe that you interact with a "world/reality", whatever that
>> is, like in a dream. If not you *do* introduce some magic in both
>> consciousness and world.
>
>
> So I need to believe some magic or I have to introduce some magic.
> That seems a distinction without a difference.

With comp the only magic is 0, 1, 2, 3, + addition + multiplication.
With non-comp the magic is primitive matter + primitive physical laws
+ primitive consciousness + non intelligible links between all those
things.
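
And that first magic is less innocent than it looks: with addition and
multiplication one can already pack pairs -- hence sequences, hence
machine histories -- into single numbers. A standard illustration in
Python (Cantor's pairing function; nothing comp-specific is assumed):

def pair(x, y):
    # encode two naturals as one; the product (x+y)*(x+y+1) is always
    # even, so the halving stays within the naturals
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    # invert via the triangular root
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return w - y, y

assert unpair(pair(12, 34)) == (12, 34)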

>
>>
>>
>>
>>> Information processing. Memory. A point of view; i.e. model of
>>> the world including self. Purpose/values.
>>
>> OK. Although "conscious purpose" is already a high form of
>> consciousness, it might be self-consciousness.
>>
>> Bruno
>
> I think there are different kinds and levels of consciousness,
> awareness, cogitation. Purpose need not be something reflected on.
> Even simple animals have purpose hardwired in their genes.

OK. I was talking about conscious purpose, not about God or Matter
"purpose".

Bruno

http://iridia.ulb.ac.be/~marchal/

Stephen P. King

unread,
Jul 22, 2011, 5:24:18 AM7/22/11
to everyth...@googlegroups.com
Hi Bruno and Craig,

On 7/22/2011 4:58 AM, Bruno Marchal wrote:
>
> On 21 Jul 2011, at 16:08, Craig Weinberg wrote:
>
>>> if you think molecules are needed, that is, that the level of
>>> substitution includes molecular activity, this too can be emulated by
>>> a computer
>>
>> But it can only be emulated in a virtual environment interfacing with
>> a computer literate human being though.
>
> Why? That's begging the question.
>
>

Bruno has a strong point here. So long as one is dealing with a system
that can be described such that that description can be turned into a
recipe to represent all aspects of the system, then it is, by definition
computable!


>
>> A real mouse will not be able
>> to live on virtual cheese.
>
> But a virtual mouse will (I will talk *in* the comp theory).

Virtual mice eat virtual cheese and get virtual calories from it! Be
careful that you're not forcing a multi-leveled concept into a single
conceptual level.


>
>
>
>> Why can't consciousness be considered
>> exactly the same way, as an irreducible correlate of specific meta-
>> meta-meta-elaborations of matter?
>
> What do you mean by matter? Primitive matter does not exist. A TOE has
> to explain where the belief in matter comes from without assuming it.
>
>

OK, Bruno, would you stop saying that unless you explicitly explain what
you mean by "primitive matter"? The point that "A TOE has to explain
where the belief in matter comes from without assuming it" is very
important, though, but you might agree that that kind of multi-leveled
TOE is foreign to most people. Not many people consider that a Theory of
Everything must contain not only a representation of what is observed
but also the means and methods of the observations thereof, or else it
is not a theory of *Everything*. This actually makes the concept of a
TOE subject to Incompleteness considerations!

>
>>
>>> All that consciousness (and matter) needs is a sufficiently rich
>>> collection of self-referential relations. It happens that the numbers,
>>> by the simple laws of addition and multiplication provides already
>>> just that. Adding some ontological elements can only make the mind
>>> body problem more complex to even just formulate.
>>
>> Information is not consciousness. Energy is the experience of being
>> informed and informing, but it is not information.
>
> I agree.
>
>

Indeed!

>
>
>> This is why a brain
>> must be alive and conscious (not in a coma) to be informed or inform,
>> and why a computer must be turned on to execute programs, or a
>> mechanical computing system has to have kinetic initialization, etc.
>
> Not at all. All you need are relative genuine relations. That does
> explain both the origin of quanta and qualia, including the difference
> of the quantitative and the qualitative.
>

But Bruno, you are side-stepping the vital question of persistence and
transitivity in that notion of "genuine relations." One's TOE has to
account for the appearance of time, even if it is not primitive. It is
not enough to show that matter is not primitive, we have to show how the
image of an evolving matter universe is possible. So far we are
implying it via diamonds, but diamonds do not map in ways that are
necessary to code interactions.

>
>> The path that energy takes determines the content of the experience to
>> some extent, but it is the physical nature of the materials through
>> which the continuous sense of interaction occurs which determine the
>> quality or magnitude of possible qualitative elaboration (physical,
>> chemo, bio, zoo-physio, neuro, cerebral) of that experience.
>
>
> How?
>
>

Umm, Craig, no. Energy is defined by the path of events of the
interaction. This is why the word "action" is used. We have a notion of
least action which relates to the minimum configuration of a system; the
content of the experience *is* the "inside view" of the process that
strives always for that minimum.

>
>
>> Physical
>> will take you to detection, chemo to sense, bio to feeling, zoo to
>> emotion, neuro to cognition, cerebral to full abstraction (colloquial
>> terms here, not asserting a formal taxonomy).
>
> You say so, but my point is that if you assume matter, your theory
> needs very special form of infinities. Which one?
>
>

Could you explain this necessity, Bruno?

>
>
>> All are forms of
>> awareness. Consciousness implies awareness of awareness
>
> That is self-consciousness.

Consciousness does not require a model of self that is integrated into
the content of consciousness, therefore consciousness is not reflexive
in the primitive sense.

>
>
>
>> which maybe
>> comes at the neuro or cerebral level, maybe lower? It has nothing to
>> do with the complexity of the path that the energy takes. Complexity
>> is an experience, not a discrete ontological condition.
>
> You need infinities to make complexity an experience, and that is like
> putting the cart before the horse.
>
>

Please explain this.

>
>>
>>> Adding some ontological elements can only make the mind
>>> body problem more complex to even just formulate.
>>
>> This makes me think that you are sentimental about protecting the
>> simplicity of an abstract formula, rather than faithfully representing
>> the problem.
>
> I was mentioning the mind-body problem. No formula was involved. You
> put infinities and uncomputability everywhere, where comp put it in
> very special place with complete justification.
>
>
>
>> I'm not especially interested in the 'easy' problem of
>> consciousness.
>
> Me neither.
>
>
>
>> It's a worthwhile problem, to be sure, it's just not my
>> thing. I do think, however, that if we can accurately describe the
>> pattern of what the hard problem seems to arise from, it may have
>> implications for both the easy and hard problems. At worst, my view
>> limits the aspirations of inorganic materials to simulate
>> consciousness,
>
> That is vitalism. It fails to explain anything. It makes the problem
> less tractable. It is similar to the God of the gap. Comp explains why
> there is a gap. I am not sure you study the theory.
>
>

OTOH, Bruno, one cannot gloss over the way that quantum logic is
non-distributive. Reducing all to combinators or numbers that do not
involve this seems doomed from the start. It is as if we dissolve
everything into a soup and say: See, Existence is soup!

>
>> but I don't see that as anything more than an
>> identification of how the cosmos works. We don't want to create
>> consciousness, we can do that already by reproducing. We want an
>> omnipotent glove for the hand of consciousness that we already have.
>> That seems easier to accomplish if we are not convincing ourselves
>> that feelings must be numbers.
>
> Comp explains completely why feelings are NOT numbers. You don't study
> the theory, and you criticize only your own prejudice about numbers
> and machines.
>
> You can use non-comp, as you seem to desire, but then tell us what is
> not Turing emulable in "organic matter"?
>
> Bruno
>

Craig, Bruno has a point there. Be sure that you are not arguing against
a straw man unintentionally!

Onward!

Stephen

Bruno Marchal

unread,
Jul 22, 2011, 5:27:04 AM7/22/11
to everyth...@googlegroups.com

Unless you believe in zombies, the point is that there *is* enough
phenomenological qualia and subjectivity, and contingency, in the
realm of numbers. The different 1-views (the phenomenology of mind, of
matter, etc.) are given by the modal variants of self-reference. This
has been done and it does explain the shape of modern physics (where
physicists are lost in a labyrinth of incompatible interpretations).
Most of the quantum weirdness consists of theorems in arithmetic.


>
> 3) the qualitative principle is identical to privacy in an ontological
> sense of being self-sequestering from public exterior access. The
> privacy itself is what defines the locus of qualitative phenomena.

OK.


>
> 4) this 'stuff' may be ultimately originating through non-local, a-
> temporal axiom of the Singularity,

?

> so that we may not only have
> restricted access by virtue of our own separation from each other, but
> qualia itself may somehow present the experience of entities which we
> would consider to be in the future as well as the past.

Nonsense with comp. We just cannot *assume* things like past and
future.

That is their error. You don't need to copy them.


> I'm only taking a hard line on this because I think that it's in such
> contradistinction to the momentum of civilized thought. A sufficiently
> evolved card game could be pretty damn impressive, and if we invest
> our own feeling into it, there is arguably new feelings that we
> experience as a result, I just don't think that what we see as the
> game can have feelings that we can realize. We can't rule out though
> that anything we experience as having no feeling has a private
> dimension that may see us as having no feeling. It just gets a bit too
> psychedelic (salviadelic?) to actually implement that level of animism
> practically, don't you think?

Only persons can think.

>
>> I am OK with this. But I do think plausible that you can emulate
>> digitally first hand experiences of pain and pleasure. Then 'real'
>> human-like pain, which can last for a time, will need the whole
>> (arithmetical) truth to be stable on its many 'futures'.
>
> I think you can emulate first-hand experiences only in a system capable
> of subjectively experiencing them.

That is tautological. I agree of course. But the question is about the
nature of that system. You seem to want it described by physics. This
is logically OK, but you have to abandon comp. That's all.


> We certainly should be able emulate
> the output of some kind of pain or pleasure and input it into another
> nervous system. Simple record and playback through an analog or
> digital medium. That's really one of my earliest and strongest dreams
> would be to be involved in the orchestration of full sensory
> experiences, brain-direct.
>
>> Our first
>> person experiences are non computably distributed on an infinite
>> structure, but that is a consequence of its digitalness at some
>> level.
>
> There was a lot made of the perceived difference in digital music when
> CDs first came out, in the audiophile communities particularly. I do
> think that a subtle difference can be detected but hard to know
> whether it's the digital nature itself or the processing, mixing,
> playback equipment, confirmation bias, etc. Digital music seems
> harsher, more sibilant and shallow on the percussion. It doesn't
> bother me much, but I think there could be a legitimate, if subtle
> difference stemming from the pure conversion of analog waveforms to
> digital samples.

I am not convinced by arguments of impossibility that point to current
technology.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jul 22, 2011, 6:10:08 AM7/22/11
to everyth...@googlegroups.com

On 22 Jul 2011, at 11:24, Stephen P. King wrote:

> Hi Bruno and Craig,
>
> On 7/22/2011 4:58 AM, Bruno Marchal wrote:
>>
>> On 21 Jul 2011, at 16:08, Craig Weinberg wrote:
>>
>>>> if you think molecules are needed, that is, that the level of
>>>> substitution includes molecular activity, this too can be
>>>> emulated by
>>>> a computer
>>>
>>> But it can only be emulated in a virtual environment interfacing
>>> with
>>> a computer literate human being though.
>>
>> Why. That's begging the question.
>>
>>
>
> Bruno has a strong point here. So long as one is dealing with a
> system that can be described such that that description can be
> turned into a recipe to represent all aspects of the system, then it
> is, by definition computable!
>>
>>> A real mouse will not be able
>>> to live on virtual cheese.
>>
>> But a virtual mouse will (I will talk *in* the comp theory).
>
> Virtual mice eat virtual cheese and get virtual calories from it!

And you can prove that virtual mice exist in arithmetic.

> Be careful that you're not forcing a multi-leveled concept into a
> single conceptual level.

?

>>
>>
>>
>>> Why can't consciousness be considered
>>> exactly the same way, as an irreducible correlate of specific meta-
>>> meta-meta-elaborations of matter?
>>
>> What do you mean by matter? Primitive matter does not exist. A TOE
>> has to explain where the belief in matter comes from without
>> assuming it.
>>
>>
> OK, Bruno, would you stop saying that unless you explicitly explain
> what you mean by "primitive matter"?

The object of the ontological commitment of materialists, naturalists,
or physicalists.
It is not assumed in comp, but its appearance is explained by the
competition among an infinity of universal numbers "acting" below the
substitution level (that is a consequence of just UDA1-7 already).


> The point that "A TOE has to explain where the belief in matter
> comes from without assuming it" is very important, though, but you
> might agree that that kind of multi-leveled TOE is foreign to most
> people. Not many people consider that a Theory of Everything must
> contain not only a representation of what is observed but also the
> means and methods of the observations thereof, or else it is not a
> theory of *Everything*.


OK.

> This actually makes the concept of a TOE subject to Incompleteness
> considerations!

Assuming comp, OK.


>
>>
>>>
>>>> All that consciousness (and matter) needs is a sufficiently rich
>>>> collection of self-referential relations. It happens that the
>>>> numbers,
>>>> by the simple laws of addition and multiplication provides already
>>>> just that. Adding some ontological elements can only make the mind
>>>> body problem more complex to even just formulate.
>>>
>>> Information is not consciousness. Energy is the experience of being
>>> informed and informing, but it is not information.
>>
>> I agree.
>>
>>
> Indeed!
>
>>
>>
>>> This is why a brain
>>> must be alive and conscious (not in a coma) to be informed or
>>> inform,
>>> and why a computer must be turned on to execute programs, or a
>>> mechanical computing system has to have kinetic initialization, etc.
>>
>> Not at all. All you need are relative genuine relations. That does
>> explain both the origin of quanta and qualia, including the
>> difference of the quantitative and the qualitative.
>>
>
> But Bruno, you are side-stepping the vital question of persistance
> and transitivity in that notion of "genuine relations." One's TOE
> has to account for the appearance of time, even it it is not
> primitive.

That has been done for subjective time. It is a construct in the S4Grz1
modality, or the X1* modality. Is there a physical time? That is a
comp open problem (as it is with most physicalist theories too).


> It is not enough to show that matter is not primitive, we have to
> show how the image of an evolving matter universe is possible.

The possibility is provided by the internal arithmetical hypostases.


> So far we are implying it via diamonds, but diamonds do not map in
> ways that are necessary to code interactions.

Not yet. If you can prove it cannot, then comp is refuted.

>
>>
>>> The path that energy takes determines the content of the
>>> experience to
>>> some extent, but it is the physical nature of the materials through
>>> which the continuous sense of interaction occurs which determine the
>>> quality or magnitude of possible qualitative elaboration (physical,
>>> chemo, bio, zoo-physio, neuro, cerebral) of that experience.
>>
>>
>> How?
>>
>>
> Umm, Craig, no. Energy is defined by the path of events of the
> interaction. This is why the word "action" is used. We have a notion
> of least action which relates to the minimum configuration of a
> system, the content of the experience *is* the "inside view" of the
> process that strives always for that minimum.


Careful. You reintroduce some physics here.


>
>>
>>
>>> Physical
>>> will take you to detection, chemo to sense, bio to feeling, zoo to
>>> emotion, neuro to cognition, cerebral to full abstraction
>>> (colloquial
>>> terms here, not asserting a formal taxonomy).
>>
>> You say so, but my point is that if you assume matter, your theory
>> needs very special form of infinities. Which one?
>>
>>
> Could explain this necessity, Bruno?

I recall that by the UD argument comp implies that matter does not
exist. So here I was giving the contrapositive. You can reintroduce
matter by negating comp. But such a matter needs you, and your body,
to be non Turing emulable (if not, comp is again assumed). That is why
a non-comp theory of matter has to introduce special non Turing
emulable infinities (not the first person infinities that we can
already justify by comp: they are also non Turing emulable, a priori).

>
>>
>>
>>> All are forms of
>>> awareness. Consciousness implies awareness of awareness
>>
>> That is self-consciousness.
>
> Consciousness does not require a model of self that is integrated
> into the content of consciousness, therefore consciousness is not
> reflexive in the primitive sense.

OK.

>
>>
>>
>>
>>> which maybe
>>> comes at the neuro or cerebral level, maybe lower? It has nothing
>>> to
>>> do with the complexity of the path that the energy takes. Complexity
>>> is an experience, not a discrete ontological condition.
>>
>> You need infinities to make complexity an experience, and that is
>> like putting the cart before the horse.
>>
>>
> Please explain this.

It is an allusion to the same infinities as above. You need them to
have a notion of experience in any non-comp context, a fortiori for
the experience of complexity.

>
>>
>>>
>>>> Adding some ontological elements can only make the mind
>>>> body problem more complex to even just formulate.
>>>
>>> This makes me think that you are sentimental about protecting the
>>> simplicity of an abstract formula, rather than faithfully
>>> representing
>>> the problem.
>>
>> I was mentioning the mind-body problem. No formula was involved.
>> You put infinities and uncomputability everywhere, where comp puts
>> them in a very special place with complete justification.
>>
>>
>>
>>> I'm not especially interested in the 'easy' problem of
>>> consciousness.
>>
>> Me neither.
>>
>>
>>
>>> It's a worthwhile problem, to be sure, it's just not my
>>> thing. I do think, however, that if we can accurately describe the
>>> pattern of what the hard problem seems to arise from, it may have
>>> implications for both the easy and hard problems. At worst, my view
>>> limits the aspirations of inorganic materials to simulate
>>> consciousness,
>>
>> That is vitalism. It fails to explain anything. It makes the
>> problem less tractable. It is similar to the God of the gaps. Comp
>> explains why there is a gap. I am not sure you have studied the theory.
>>
>>
> OTOH, Bruno, one cannot gloss over the way that quantum logic is non-
> distributive. Reducing all to combinators or numbers that do not
> involve this seems doomed from the start.

On the contrary, we have to explain the non-distributivity from the
physics (observable) extracted from comp. But this has been done. The
quantum logics extracted from comp are non-distributive (very plausibly).
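(An aside for context: a standard textbook illustration of the non-distributivity at issue, not taken from this thread. For a spin-1/2 particle, let p = "spin up along z", q = "spin up along x", r = "spin down along x". Since q v r spans the whole state space, while p shares no common eigenstate with q or with r,

    p \wedge (q \vee r) = p, \qquad (p \wedge q) \vee (p \wedge r) = 0 \vee 0 = 0,

so the distributive law fails in the lattice of quantum propositions.)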


> it is as if we dissolve everything into a soup and say: See,
> Existence is soup!

? (lol)

Bruno


Craig Weinberg

unread,
Jul 22, 2011, 7:06:15 AM7/22/11
to Everything List
>Regardless of what the nerve cells experience individually, if it can't be
>communicated to other nerve cells, it can't be talked about, thought
>about, or wondered about.

I think it could be shared between nerve cells; I'm saying it's not
shared with us. We are a political partition of a living organism. The
experiences which get kicked up to us are heavily filtered, but that
filtering can be modified. Everything that we know about how the
nervous system functions is based upon our assumption that neurons are
not feeling anything, and that feeling is metaphysically manifested at
some point, somehow, as an 'interpretation' or 'emergent property'.

My view is that since we know for a fact that we would not be able to
detect subjectivity outside of ourselves, and we know for a fact that
we have subjectivity, and that our nervous system is made of neurons,
and that we feel through our nervous system, there is absolutely no
reason to presume that the feelings we experience do not originate
from the feelings of neurons themselves, and not only through
neurological biochemistry. The biochemistry reflects the feelings,
sure, but they operate in completely different ways. The feelings have
much more latitude in how they are propagated and stored.

Craig
http://s33light.org

On Jul 21, 9:28 pm, Jason Resch <jasonre...@gmail.com> wrote:

Craig Weinberg

unread,
Jul 22, 2011, 7:16:43 AM7/22/11
to Everything List
>No doubt it would be technically difficult to make an artificial
>replacement for a neuron in a different substrate, but there is no
>theoretical reason why it could not be done, since there is no
>evidence for any magical processes inside neurons.

Subjectivity is the magic process inside living neurons that is
unknown outside of that context. Life is the magic process going on
through all cells and tissues that is unknown outside of organic
chemistry.

The argument is
>that IF an artificial neuron could be made which would replicate the
>behaviour of a biological neuron well enough to slot into position in
>a brain unnoticed THEN the consciousness of that brain would be
>unaffected. If not, a bizarre situation would arise where
>consciousness could change or disappear (eg., going blind) without the
>subject noticing. Can you address this particular point?

I have already addressed this point - you can have a living person
with a prosthetic limb but you can't replace a person's brain with a
prosthetic and have it still be that person. The limb only works
because there is enough of the body left to telegraph sensorimotive
action through/around the prosthetic obstacle. On one level, the more
neurons you replace, the more obstacles you introduce. If the living
cells are able to talk to each other well through the prosthetic
network, then functionality should be retained, but I would expect the
experience of that functionality to be increasingly truncated. The
living neurons will likely be able to compensate for quite a bit of
this loss, as the network is likely massively fault-tolerant and
redundant, but if you keep replacing the live cells with pegs,
eventually I think you're going to get decompensation, dementia, and
catatonia or some zombie-like state which will likely be recognizable
to other human beings.

On Jul 21, 10:58 pm, Stathis Papaioannou <stath...@gmail.com> wrote:

Craig Weinberg

unread,
Jul 22, 2011, 8:35:45 AM7/22/11
to Everything List
>> But it can only be emulated in a virtual environment interfacing with
>> a computer literate human being though.
>
>Why? That's begging the question.

Are you suggesting that a virtual emulation of petroleum will someday
be usable in real world cars?

>But a virtual mouse will (I will talk *in* the comp theory).

Sure, but a virtual mouse could live on virtual granite and bleach too
if you programmed it that way. It's just a cartoon simulation of
molecules, not physical molecules.

>What do you mean by matter? Primitive matter does not exist. A TOE has
>to explain where the belief in matter comes from without assuming it.

Matter as experienced by persons on an ordinary mesocosmic level.
Matter comes from the singularity dividing its substance with its
absence, which is space. The experience of the division is
sensorimotive energy, or feeling changes of the interior/trans-terior
of the singularity divided by its absence, which is time.

>> This is why a brain
>> must be alive and conscious (not in a coma) to be informed or inform,
>> and why a computer must be turned on to execute programs, or a
>> mechanical computing system has to have kinetic initialization, etc.
>
>Not at all. All you need are relative genuine relations. That does
>explain both the origin of quanta and qualia, including the difference
>between the quantitative and the qualitative.

At the moment of death, how do the relative genuine relations change
in a brain enough to justify permanent unconsciousness? How can we
turn on a computer without some electricity or mechanical chain
reaction?

>> The path that energy takes determines the content of the experience to
>> some extent, but it is the physical nature of the materials through
>> which the continuous sense of interaction occurs which determine the
>> quality or magnitude of possible qualitative elaboration (physical,
>> chemo, bio, zoo-physio, neuro, cerebral) of that experience.
>
>How?

In the same way that a living cell is a qualitatively different
gestalt than the sum of its parts. It not only does things that the
molecules alone do not, but it feels things that they do not. Maybe it
feels less than the molecules? Maybe both. Maybe the richness of the
cellular qualia comes at the expense of condensing a range of spawning
sensorimotive micro-experiences from the atomic level? The difference
is that some atoms support molecules which support cells and some
don't. I have no reason to believe that molecules which do not support
cells should get the benefit of the doubt of being able to produce all
functions of cells, particularly when those functions appear
significant to us. The difference between being alive and dead is
significant. There's nothing wrong with being dead if you're a stone,
but if stone were going to start growing and reproducing sexually, I
think it probably would have done so by now. Why does everything have
to be able to turn into a human mind?

>> Physical
>> will take you to detection, chemo to sense, bio to feeling, zoo to
>> emotion, neuro to cognition, cerebral to full abstraction (colloquial
>> terms here, not asserting a formal taxonomy).
>
>You say so, but my point is that if you assume matter, your theory
>needs a very special form of infinities. Which one?

I assume the appearance of matter, and the appearance of different
levels of matter's extension. What infinities are required for that?

>> All are forms of
>> awareness. Consciousness implies awareness of awareness
>
>That is self-consciousness.

I think of self-consciousness as awareness of self-awareness, i.e.
neurotic feedback on the theme of consciousness. Awareness is 'there
is a flower'. Consciousness is 'I am looking at a flower'. Self-
consciousness is 'Am I weird for looking at a flower?'. This is just
how I'm using these terms, not trying to say there is an objective
definition of loose linguistic concepts like 'consciousness'.

>> which maybe
>> comes at the neuro or cerebral level, maybe lower? It has nothing to
>> do with the complexity of the path that the energy takes. Complexity
>> is an experience, not a discrete ontological condition.
>
>You need infinities to make complexity an experience, and that is like
>putting the cart before the horse.

?

Complexity is an experience, so whatever that requires must be the
case.

>I was mentioning the mind-body problem. No formula was involved. You
>put infinities and uncomputability everywhere, where comp puts them in
>a very special place with complete justification.

I'm just recognizing the half of the cosmos which computes more than
it is computable. The part that feels better when it takes a shower,
not because it satisfies a simulation's logic.

>> It's a worthwhile problem, to be sure, it's just not my
>> thing. I do think, however, that if we can accurately describe the
>> pattern of what the hard problem seems to arise from, it may have
>> implications for both the easy and hard problems. At worst, my view
>> limits the aspirations of inorganic materials to simulate
>> consciousness,
>
>That is vitalism. It fails to explain anything. It makes the problem
>less tractable. It is similar to the God of the gaps. Comp explains why
>there is a gap. I am not sure you have studied the theory.

Comp explains feeling? First-person subjectivity? Comp seems really
abstract to me. I don't see how it would explain the experience of a
two-year-old.

>Comp explains completely why feelings are NOT numbers. You don't study
>the theory, and you criticize only your own prejudice about numbers
>and machines.

Why not just tell me what comp says about why feelings aren't numbers?
I have studied the theory to an extent, but it doesn't make sense to
me after that. I don't know what it's referring to or why. It needs
concrete, commonsense examples for me to understand it.

>You can use non-comp, as you seem to desire, but then tell us what is
>not Turing emulable in "organic matter"?

The difference between how an organism feels and how an inorganic
compound feels is not emulable. It can only be imagined or
experienced vicariously by another organism.

Craig
http://s33light.org

1Z

unread,
Jul 22, 2011, 10:00:34 AM7/22/11
to Everything List


On Jul 22, 12:06 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> >Regardless of what the nerve cells experience individually, if it can't be
> >communicated to other nerve cells, it can't be talked about, thought
> >about, or wondered about.
>
> I think it could be shared between nerve cells; I'm saying it's not
> shared with us. We are a political partition of a living organism. The
> experiences which get kicked up to us are heavily filtered, but that
> filtering can be modified. Everything that we know about how the
> nervous system functions is based upon our assumption that neurons are
> not feeling anything, and that feeling is metaphysically manifested at
> some point, somehow, as an 'interpretation' or 'emergent property'.
>
> My view is that since we know for a fact that we would not be able to
> detect subjectivity outside of ourselves, and we know for a fact that
> we have subjectivity, and that our nervous system is made of neurons,
> and that we feel through our nervous system, there is absolutely no
> reason to presume that the feelings we experience do not originate
> from the feelings of neurons themselves,

Yes we do: the grain problem.

> and not only through
> neurological biochemistry. The biochemistry reflects the feelings,
> sure, but they operate in completely different ways. The feelings have
> much more latitude in how they are propagated and stored.
>
> Craig
> http://s33light.org

Craig Weinberg

unread,
Jul 22, 2011, 10:49:33 AM7/22/11
to Everything List
>Bruno has a strong point here. So long as one is dealing with a system
>that can be described such that that description can be turned into a
>recipe to represent all aspects of the system, then it is, by definition
>computable!

The recipe is computable (as is the menu, the description, the chemical
analysis), but the meal isn't. A recipe for virtual molecules isn't
sufficient to develop actual molecules that would be perceived as such
by microorganisms, other actual molecules, dogs, cats, etc. Only we
know how to access the simulation that we imagine resembles a
molecule. There is no objective quality of resemblance without a
subjective interpreter; there's just separate phenomena. One iron atom
has nothing to do with another iron atom unless there is some perceiver
to recognize a common pattern. A does not equal A unless we perceive
pattern and similarity. These things are not a given. A cat doesn't do
A = A. Maybe >{tuna}< = >{tuna}<.

>>> The path that energy takes determines the content of the experience to
>>> some extent, but it is the physical nature of the materials through
>>> which the continuous sense of interaction occurs which determine the
>>> quality or magnitude of possible qualitative elaboration (physical,
>>> chemo, bio, zoo-physio, neuro, cerebral) of that experience.
>
>> How?
>
>Umm, Craig, no. Energy is defined by the path of events of the
>interaction. This is why the word "action" is used. We have a notion of
>least action which relates to the minimum configuration of a system; the
>content of the experience *is* the "inside view" of the process that
>always strives for that minimum.

What I'm saying though is that an animated sculpture of a cell made
from plaster is not a cell. Each plaster organelle and every plaster
cast of a chromosome wired up with finely articulated servo motors or
whatever - filled with microbeads of clear plastic or whatever... that
thing is never going to go through mitosis. It's not made of units
that know how to do that. Even if it's built to produce more plaster
and beads, to create more copies of itself (which would still be going
outside of the level on which the simulation would formally have to be
compared to be analogous to emulating feeling), it's not having an
experience of survival or sense, it's having an experience of plaster
and plastic. There may not be an absolutely objective difference
between a living cell and the molecules that compose it, but our
perception is that there is a significant difference, which only gets
more significant the further an embryo gets from a sand castle. There
is no point where a sand castle is so complex that it becomes capable
of meta-sand castlery. It won't ever come to life by itself, even if
it's the size of the Andromeda galaxy.

>it is as if we dissolve
>everything into a soup and say: See, Existence is soup!

Right, that's how I see my understanding of comp as well. If you
disqualify everything that isn't computable, then what you are left
with is computable.

>> Comp explains completely why feelings are NOT numbers. You don't study
>> the theory, and you criticize only your own prejudice about numbers
>> and machines.
>
>> You can use non-comp, as you seem to desire, but then tell us what is
>> not Turing emulable in "organic matter"?
>
>> Bruno
>
>Craig, Bruno has a point there. Be sure that you are not arguing against
>a straw man unintentionally!

Yeah, I would need to know how comp explains feelings exactly. I'm
just going by my observation that numbers are in many ways everything
that feeling is not. To get to the feeling of numbers, you have to
look at something like numerology.

Craig Weinberg

unread,
Jul 22, 2011, 11:11:26 AM7/22/11
to Everything List
>Unless you believe in zombies, the point is that there *is* enough
>phenomenological qualia and subjectivity, and contingencies, in the
>realm of numbers. The different 1-views (the phenomenology of mind, of
>matter, etc.) are given by the modal variants of self-reference. This
>has been done and this does explain the shape of modern physics (where
>physicists are lost in a labyrinth of incompatible interpretations).
>Most of the quantum weirdnesses are theorems in arithmetic.

I believe in zombies insofar as it would be possible to simulate a
human presence with a YouTube flip book as I described, or to
simulate a human brain digitally, which would be a zombie as far as
having any internal awareness beyond the semiconductor experience of
permittivity/permeability/wattage, etc.

>> so that we may not only have
>> restricted access by virtue of our own separation from each other, but
>> qualia itself may somehow present the experience of entities which we
>> would consider to be in the future as well as the past.
>
>Nonsense with comp. We just cannot *assume* things like past and
>future.

I'm saying that we human beings consider them to be in the future and
the past, not that there is a future or past.

>That is their error. You don't need to copy them.

You think that asserting a hypothesis that feeling is not quantifiable
is the same thing as rationalizing genocide and slavery? I think it's
just the opposite. It's the belief in arithmetic over subjectivity
that is leading the planet down the primrose path to asphyxiation and
madness.

>Only persons can think.

I thought the point of comp was that digital simulation is sufficient
to simulate thought.

>That is tautological. I agree of course. But the question is about the
>nature of that system. You seem to want it described by physics. This
>is logically OK, but you have to abandon comp. That's all.

If comp cannot embrace physics, and physics cannot embrace comp, then
we have to turn to something which reconciles both.

>I am not convinced by argument of impossibility pointing on actual
>technology.

Not sure what you mean.

meekerdb

unread,
Jul 22, 2011, 2:52:33 PM7/22/11
to everyth...@googlegroups.com
On 7/22/2011 2:11 AM, Bruno Marchal wrote:
> But what is a world? Also, assuming computationalism, you need only to believe that you interact with a "world/reality", whatever that is, like in a dream. If not you *do* introduce some magic in both consciousness and world.

So I need to believe some magic or I have to introduce some magic.  That seems a distinction without a difference.

> With comp the only magic is 0, 1, 2, 3, + addition + multiplication.

But is that the *only* magic?  It seems to me that your argument includes the magic of the UD.  If I understand it, it says that if a UD is running it executes all possible programs.  Among those programs are ones that are simulations of Everett's multiverse, such as we may inhabit, including the simulations of ourselves.  Consciousness is some part of the information processing in those simulations of us, where the same conscious state is realized in many different programs and so has many different continuations and predecessors.

But all this is hypothetical, depending on a UD.  And aside from the problem that prima facie it will produce more chaotic non-lawlike experiences than law-like ones, there is no reason to suppose a UD exists.  This explanation of the world is very much like Boltzmann's brain.  It generates "everything" and then tries to pick out "this".

Brent

meekerdb

unread,
Jul 22, 2011, 3:19:46 PM7/22/11
to everyth...@googlegroups.com
On 7/22/2011 4:16 AM, Craig Weinberg wrote:
> I have already addressed this point - you can have a living person
> with a prosthetic limb but you can't replace a person's brain with a
> prosthetic and have it still be that person. The limb only works
> because there is enough of the body left to telegraph sensorimotive
> action through/around the prosthetic obstacle. On one level, the more
> neurons you replace, the more obstacles you introduce. If the living
> cells are able to talk to each other well through the prosthetic
> network, then functionality should be retained,

I think your theory is incoherent. If the neurons can "talk to each
other" thru the "pegs" then all the neurons except the afferent neurons
of perception and the efferent neurons of action could be replaced and
the person would *behave* exactly the same, including reporting that
they felt the same. They would be a philosophical zombie. They would
not *exhibit* dementia, catatonia, or any other symptom.

Brent

1Z

unread,
Jul 22, 2011, 4:41:48 PM7/22/11
to Everything List


On Jul 22, 3:49 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> There is no objective quality of resemblance without a
> subjective intepreter

says who?

Craig Weinberg

unread,
Jul 22, 2011, 5:55:56 PM7/22/11
to Everything List
I'm saying that if you kept randomly replacing neurons it would
eventually look like dementia or some other progressive brain-wasting
disease. If it were possible to spare certain areas or categories of
neurons then I would expect more of a fragmented subject whose means
of expression are intact, but who may not know what they are about to
express. A partial zombie, being fed meaningless instructions but
carrying them out consciously, if involuntarily. Of course, there may
be all kinds of semantic dependencies which would render someone
comatose before it ever got that far. If I remove all vowels from my
writing there is a certain effect. If I remove all of the verbs there
is another; if I switch to 50% Chinese it's different from going 50%
binary, etc. You would have to experiment to find out, but I think the
success would hinge as much on retaining organic composition as on
reproducing logical characteristics.

Craig Weinberg

unread,
Jul 22, 2011, 6:05:55 PM7/22/11
to Everything List
Are you positing a universal substance of resemblance? How does it
work?

If I see two mounds of dirt they might look the same to me, but maybe
they host two different ant colonies. Is the non-subjective
resemblance more like mine or the ants'?

meekerdb

unread,
Jul 22, 2011, 6:25:45 PM7/22/11
to everyth...@googlegroups.com
On 7/22/2011 2:55 PM, Craig Weinberg wrote:
> I'm saying that if you kept randomly replacing neurons it would
> eventually look like dementia or some other progressive brain-wasting
> disease.

But that's contradicting your assumption that the "pegs" are transparent
to the neural communication:

"If the living
cells are able to talk to each other well through the prosthetic
network, then functionality should be retained"

Whatever neurons remain, even if it's only the afferent/efferent
ones, they get exactly the same communication as if there were no "pegs"
and the whole brain was neurons.

> If it were possible to spare certain areas or categories of
> neurons then I would expect more of a fragmented subject whose means
> of expression are intact, but who may not know what they are about to
> express. A partial zombie, being fed meaningless instructions but
> carrying them out consciously, if involuntarily. Of course, there may
> be all kinds of semantic dependencies which would render someone
> comatose before it ever got that far. If i remove all vowels from my
> writing there is a certain effect. If i remove all of the verbs there
> is another, if i switch to 50% chinese it's different from going 50%
> binary, etc. You would have to experiment to find out but i think the
> success would hinge as much on reraining organic composition as
> reproducing logical characteristics.
>

You're evading the point by changing examples.

It does raise in my mind an interesting point though. These questions
are usually considered in terms of replacing some part of the brain (a
neuron, or a set of neurons) by an artificial device that implements the
same input/output function. It then seems, absent some intellect
vitale, that the behavior of that brain/person would be unchanged. But
wouldn't it be likely that the person would suffer some slight
impairment in learning/memory simply because the artificial device
always computes the same function, whereas the biological neurons grow
and change in response to stimuli? And those stimuli are external and
cannot be foreseen by the doctor. So what he needs to implant is not
just a fixed function but a function that depends on the history of its
inputs (i.e. a function with memory).

Brent
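(A toy contrast, in Python, of the two kinds of device distinguished above: a fixed input/output element versus a "function with memory" whose response depends on the history of its inputs. The class names and the adaptation rule are invented for this sketch; they only stand in for whatever plasticity a real replacement part would need.)

    class FixedNeuron:
        """Always computes the same function of its input."""
        def fire(self, x):
            return 1.0 if x > 0.5 else 0.0

    class PlasticNeuron:
        """A function with memory: its threshold drifts with use."""
        def __init__(self):
            self.threshold = 0.5
        def fire(self, x):
            out = 1.0 if x > self.threshold else 0.0
            # crude use-dependent adaptation, a stand-in for synaptic
            # change: firing raises the threshold, silence lowers it
            self.threshold += 0.05 if out else -0.05
            return out

    n = PlasticNeuron()
    print([n.fire(0.6) for _ in range(5)])   # [1.0, 1.0, 0.0, 1.0, 0.0]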

Bruno Marchal

unread,
Jul 22, 2011, 7:17:43 PM7/22/11
to everyth...@googlegroups.com
On 22 Jul 2011, at 20:52, meekerdb wrote:

> On 7/22/2011 2:11 AM, Bruno Marchal wrote:
>> But what is a world? Also, assuming computationalism, you need only to believe that you interact with a "world/reality", whatever that is, like in a dream. If not you *do* introduce some magic in both consciousness and world.
>
> So I need to believe some magic or I have to introduce some magic.  That seems a distinction without a difference.
>
>> With comp the only magic is 0, 1, 2, 3, + addition + multiplication.
>
> But is that the *only* magic?  It seems to me that your argument includes the magic of the UD.  If I understand it, it says that if a UD is running it executes all possible programs.  Among those programs are ones that are simulations of Everett's multiverse, such as we may inhabit, including the simulations of ourselves.  Consciousness is some part of the information processing in those simulations of us, where the same conscious state is realized in many different programs and so has many different continuations and predecessors.
>
> But all this is hypothetical, depending on a UD.


The UD is a collection of number relations, and its existence is a theorem in elementary arithmetic. I recalled this to you some hours ago, I think. There is nothing hypothetical in the existence of the UD. It is already part of the proofs in the non-Löbian universal machine. It is a computer scientist's description of Sigma_1 truth. Even intuitionists have a UD.
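(A minimal sketch of the dovetailing idea behind the UD, in Python. This toy scheduler is an illustration of the interleaving only, not Bruno's arithmetical construction; the point is that every program index eventually receives every step number, so no non-halting program blocks the rest.)

    from itertools import count, islice

    def dovetail():
        """Yield (program_index, step_number) pairs so that each pair
        (p, n) is eventually produced, at phase p + n."""
        for phase in count(1):
            for p in range(phase):
                yield (p, phase - p)

    print(list(islice(dovetail(), 6)))
    # [(0, 1), (0, 2), (1, 1), (0, 3), (1, 2), (2, 1)]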

The theory of everything is less demanding than the UD argument, which presupposes that you are conscious and that there is some consensual reality, with doctors and brains, for example.
But the UDA should convince you that the TOE is just:

0 is different from s(x)
s(x) = s(y) -> x = y
x + 0 = x
x + s(y) = s(x + y)
x*0 = 0
x*s(y) = x*y + x

That's all. People who do not like numbers can take:
Kxy = x
Sxyz = xz(yz).

It is equivalent.
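(Read as left-to-right rewrite rules, those axioms are directly executable. A minimal sketch in Python; the encoding of numerals as nested s(...) terms, and the helper names, are choices made for this illustration.)

    ZERO = 0

    def s(x):                    # successor: s(x) stands for x + 1
        return ('s', x)

    def add(x, y):               # x + 0 = x ;  x + s(y) = s(x + y)
        return x if y == ZERO else s(add(x, y[1]))

    def mul(x, y):               # x*0 = 0 ;  x*s(y) = x*y + x
        return ZERO if y == ZERO else add(mul(x, y[1]), x)

    def to_int(x):               # decode a numeral for display
        n = 0
        while x != ZERO:
            n, x = n + 1, x[1]
        return n

    two, three = s(s(ZERO)), s(s(s(ZERO)))
    print(to_int(add(two, three)), to_int(mul(two, three)))   # prints: 5 6

    # The combinator alternative: Kxy = x and Sxyz = xz(yz), curried.
    K = lambda x: lambda y: x
    S = lambda x: lambda y: lambda z: x(z)(y(z))
    print(S(K)(K)(42))           # SKK acts as the identity: prints 42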



> And aside from the problem that prima facie it will produce more chaotic non-lawlike experiences than law-like ones, there is no reason to suppose a UD exists.  This explanation of the world is very much like Boltzmann's brain.  It generates "everything" and then tries to pick out "this".


The UD exists independently of you, like 777 is odd independently of you. Don't confuse the UD with the "concrete UD" needed at step seven in the UDA. The universe does not need to "disappear" for most of the consequences to be already available, and it does not need to be emulated at all, by step 8.


Bruno




Bruno Marchal

unread,
Jul 22, 2011, 7:26:39 PM7/22/11
to everyth...@googlegroups.com

Comp embraces the non computable. If you study the work you will
understand that both matter and mind arise from the non computable,
with comp.


>
>>> Comp explains completely why feelings are NOT numbers. You don't
>>> study
>>> the theory, and you criticize only your own prejudice about numbers
>>> and machines.
>>
>>> You can use non-comp, as you seem to desire, but then tell us what
>>> is
>>> not Turing emulable in "organic matter"?
>>
>>> Bruno
>>
>> Craig, Bruno has a point there. Be sure that you are not arguing
>> against
>> a straw man unintentionally!
>
> Yeah, I would need to know how comp explains feelings exactly.

See the second part of sane04. Ask questions if there are problems.

> I'm
> just going by my observation that numbers are in many ways everything
> that feeling is not. To get to the feeling of numbers, you have to
> look at something like numerology.

I doubt that very much. Lol.
All you need is computer science. Actually all you need is addition
and multiplication (and working a little bit, well, a lot probably).

Bruno


Bruno Marchal

unread,
Jul 22, 2011, 8:40:59 PM7/22/11
to everyth...@googlegroups.com


That would just mean that the neuronal level is too high to be the
substitution level. Better to choose the DNA and metabolic level.

Bruno

Craig Weinberg

unread,
Jul 22, 2011, 9:35:22 PM7/22/11
to Everything List
On Jul 22, 6:25 pm, meekerdb <meeke...@verizon.net> wrote:

>But that's contradicting your assumption that the "pegs" are transparent
>to the neural communication:
>
>"If the living
>cells are able to talk to each other well through the prosthetic
>network, then functionality should be retained"

Neurological functionality is retained but there are fewer and fewer
actual neurons to comprise the network, so the content of the
conversations is degraded, even though that degradation is preserved
with high fidelity.

> Whatever neurons remain, even if it's only the afferent/efferent
>ones, they get exactly the same communication as if there were no "pegs"
>and the whole brain was neurons.

Think of them like sock puppet/bots multiplying in a closed social
network. If you have 100 actual friends on a social network and their
accounts are progressively replaced by emulated accounts posting even
slightly unconvincing status updates, you rapidly lose interest in
those updates and either route around them, focusing on the
diminishing group of your original non-bots, or check out of the
network altogether. A neuron is more than its communication. A
communicating peg cannot communicate feelings that it doesn't have; it
can only emulate computations that are based upon feeling correlates.

>You're evading the point by changing examples.

Not intentionally. It's just that the example is built on fundamental
assumptions which I think are not only untrue, but buried in the gap
between our understanding of consciousness and our understanding of
everything else. The assumption being that our consciousness must work
like everything else that our consciousness can examine objectively,
whereas my working assumption is to suppose that our consciousness
works in exactly the opposite way, and that opposition itself is
critically important and fundamental to any understanding of
consciousness. Observing our neurons' behaviors is like chasing
billions of our tails, and assuming that their heads must be our head.
Replacing the tails alone doesn't make our head happen magically. The
neurons that we see are only the outer half of the neurons that we
are. The inside looks like our lives, our society, our evolution as
organisms.

>It does raise in my mind an interesting point though. These questions
>are usually considered in terms of replacing some part of the brain (a
>neuron, or a set of neurons) by an artificial device that implements the
>same input/output function. It then seems, absent some intellect
>vitale, that the behavior of that brain/person would be unchanged. But
>wouldn't it be likely that the person would suffer some slight
>impairment in learning/memory simply because the artificial device
>always computes the same function, whereas the biological neurons grow
>and change in response to stimuli? And those stimuli are external and
>cannot be foreseen by the doctor. So what he needs to implant is not
>just a fixed function but a function that depends on the history of its
>inputs (i.e. a function with memory).

Now you're getting closer to what I'm looking at. A flat model of a
neuron is not a neuron. A neuron is a living thing. It has respiration.
It learns and grows. It's us.


Craig

Craig Weinberg

unread,
Jul 22, 2011, 9:48:38 PM7/22/11
to Everything List
On Jul 22, 7:26 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:

> Comp embraces the non computable. If you study the work you will  
> understand that both matter and mind arise from the non computable,  
> with comp.

> See the second part of sane04. Ask question if there are problems.

I know you must have gone over it too many times already in other
places, so I'm not expecting you to reiterate comp for me, but I
haven't been able to see how comp embraces the non computable. To me,
any time you say that comp explains something or direct me to your
work, it's the same as someone saying 'The Bible explains that'. Not
trying to disparage your way of teaching or motivating, just saying
that I can't seem to do anything with it. To me, if it can't be made
understandable within the context of the discussion at hand, it's
better left to another discussion.

> > I'm
> > just going by my observation that numbers are in many ways everything
> > that feeling is not. To get to the feeling of numbers, you have to
> > look at something like numerology.
>
> I doubt that very much. Lol.
> All you need is computer science. Actually all you need is addition
> and multiplication (and working a little bit, well, a lot probably).

What are your doubts based upon?

Craig
http://s33light.org

Craig Weinberg

unread,
Jul 22, 2011, 9:58:22 PM7/22/11
to Everything List
On Jul 22, 8:40 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:

> That would just mean that the neuronal level is too high to be the
> substitution level. Better to choose the DNA and metabolic level.

Right. If you make tweaked real cells out of real atoms that are
arranged as an alternative to DNA, I think you'd have a good chance of
emulating an organism that is conscious. I don't think that you could
control its behavior deterministically though; it would just be a
clone by another means. The question then becomes: why bother with the
synthetic DNA when natural DNA is already available?

If you're talking about emulating DNA in silicon, I think you still
come out with a convincing sculpture. A glass brain.

Craig

meekerdb

unread,
Jul 22, 2011, 10:18:45 PM7/22/11
to everyth...@googlegroups.com
On 7/22/2011 6:35 PM, Craig Weinberg wrote:
> On Jul 22, 6:25 pm, meekerdb <meeke...@verizon.net> wrote:
>
>> But that's contradicting your assumption that the "pegs" are transparent
>> to the neural communication:
>>
>> "If the living
>> cells are able to talk to each other well through the prosthetic
>> network, then functionality should be retained"
>
> Neurological functionality is retained but there are fewer and fewer
> actual neurons to comprise the network, so the content of the
> conversations is degraded, even though that degradation is preserved
> with high fidelity.

Well at least we've got the contradiction compressed down into one sentence: "Degradation is preserved with high fidelity."

>> Whatever neurons remain, even if it's only the afferent/efferent
>> ones, they get exactly the same communication as if there were no "pegs"
>> and the whole brain was neurons.
>
> Think of them like sock puppet/bots multiplying in a closed social
> network. If you have 100 actual friends on a social network and their
> accounts are progressively replaced by emulated accounts posting even
> slightly unconvincing status updates, you rapidly lose interest in
> those updates and either route around them, focusing on the
> diminishing group of your original non-bots, or check out of the
> network altogether. A neuron is more than its communication.

Not to the next neuron it isn't...and not to the efferent neurons.  If there is something that isn't communicated, it can't make a difference to behavior because we know that muscles are moved by what the neurons communicate to them.
Or as Bruno suggests, just model it at a lower level.  Of course if you have to model it at the quark level, you might as well make your artificial neuron out of quarks and it won't be all that "artificial".

Brent

Craig Weinberg

unread,
Jul 22, 2011, 11:52:49 PM7/22/11
to Everything List
On Jul 22, 10:18 pm, meekerdb <meeke...@verizon.net> wrote:

> Well at least we've got the contradiction compressed down into one
> sentence: "Degradation is preserved with high fidelity."

Is it a contradiction to say that someone is having a bad conversation
over clear telephones?

> > ...A neuron is more than its communication.
>
> Not to the next neuron it isn't...and not to the efferent neurons.  If
> there is something that isn't communicated, it can't make a difference
> to behavior because we know that muscles are moved by what the neurons
> communicate to them.

Muscles aren't moved by neurons; muscles move themselves in sympathy
with neuronal motivation. Behavior isn't everything, especially a
third-person observation of a behavior on an entirely different scale
of physical activity.

> Or as Bruno suggests, just model it at a lower level.  Of course if you
> have to model it at the quark level, you might as well make your
> artificial neuron out of quarks and it won't be all that "artificial".

Exactly what I've been saying. If you model only the superficial
behaviors, you can't expect the meaningful roots of those behaviors to
appear spontaneously.

Craig Weinberg

unread,
Jul 23, 2011, 12:05:50 AM7/23/11
to Everything List
On Jul 22, 10:18 pm, meekerdb <meeke...@verizon.net> wrote:
>  Of course if you
> have to model it at the quark level, you might as well make your
> artificial neuron out of quarks and it won't be all that "artificial".

Actually, I think it would have to be a real quark (if quarks even
'exist'). The bottom line is that silicon is already made of
something. We can project our own sense and motives through silicon,
but whatever we project is only an exterior that faces our
observation. Its interior remains a silicon interior, unable to
precipitate a larger structure that has a biological spectrum of
feeling.

The behavior of a quark isn't mathematically inevitable in all
possible universes; its math is forensically reverse-engineered from
our observations. To simulate those observations doesn't bring the
unobservable interiority of the original into simulated existence.

Craig

meekerdb

unread,
Jul 23, 2011, 12:14:27 AM7/23/11
to everyth...@googlegroups.com
On 7/22/2011 8:52 PM, Craig Weinberg wrote:
> On Jul 22, 10:18 pm, meekerdb<meeke...@verizon.net> wrote:
>
>
>> Well at least we've got the contradiction compressed down into one
>> sentence: "Degradation is preserved with high fidelity."
>>
> Is it a contradiction to say that someone is having a bad conversation
> over clear telephones?
>

Where does the badness come from? The afferent neurons?

>
>>> ...A neuron is more than its communication.
>>>
>> Not to the next neuron it isn't...and not to the efferent neurons. If
>> there is something that isn't communicated, it can't make a difference
>> to behavior because we know that muscles are moved by what the neurons
>> communicate to them.
>>
> Muscles aren't moved by neurons; muscles move themselves in sympathy
> with neuronal motivation. Behavior isn't everything, especially a
> third person observation of a behavior on an entirely different scale
> of physical activity.
>

But that's the crux of the argument. If behavior isn't everything then,
according to you, a person whose brain has been replaced by artificial,
but functionally identical elements, could be a philosophical zombie.
One whose every behavior is exactly like that of a person with a biological
brain - including reporting the same feelings. Yet that is contrary to
your assertion that they would exhibit dementia.

>
>> Or as Bruno suggests, just model it at a lower level. Of course if you
>> have to model it at the quark level, you might as well make your
>> artificial neuron out of quarks and it won't be all that "artificial".
>>
> Exactly what I've been saying. If you model only the superficial
> behaviors, you can't expect the meaningful roots of those behaviors to
> appear spontaneously.
>
>

No, you've been saying more than that. You've been saying that even if
the artificial elements emulate the biological ones at a very low level
they won't work unless they *are* biological. When I said that if you
have to model at the quark level you might as well make up "real"
neurons, that was a recommendation of efficiency. According to Bruno,
and functionalist theory, it might be very inefficient to emulate the
quarks with a Turing machine but it is in principle equally effective.

Brent

meekerdb

unread,
Jul 23, 2011, 12:21:12 AM7/23/11
to everyth...@googlegroups.com

"Forensically"?? Do we need a Weinberg-English dictionary?

Brent

Bruno Marchal

unread,
Jul 23, 2011, 5:41:39 AM7/23/11
to everyth...@googlegroups.com

On 23 Jul 2011, at 03:48, Craig Weinberg wrote:

> On Jul 22, 7:26 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>> Comp embraces the non computable. If you study the work you will
>> understand that both matter and mind arise from the non computable,
>> with comp.
>
>> See the second part of sane04. Ask question if there are problems.
>
> I know you must have gone over it too many times already in other
> places, so I'm not expecting you to reiterate comp for me, but I
> haven't been able to see how comp embraces the non computable.

It embraces it in many places. First, the first-person indeterminacy
leads to the taking into account of uncomputable sequences in the
first-person experiences. Just iterate the Washington-Moscow
experience n times. There will be 2^n resulting versions of you, and
most will acknowledge the apparent non-computability of their history
(like WWMMWWWWMWMMWMMMMWWW ...).
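(A quick check of the counting claim, in Python; an illustration only. After n duplications there are 2^n distinct first-person W/M histories, and for large n the typical such string admits no short description.)

    from itertools import product

    n = 4
    histories = [''.join(h) for h in product('WM', repeat=n)]
    print(len(histories))    # 2**n = 16
    print(histories[:4])     # ['WWWW', 'WWWM', 'WWMW', 'WWMM']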

Secondly, at the modal first-order level, none of the hypostases are
decidable. Provable Bp is Pi_2-complete, and true Bp is Pi_1-complete in
the oracle of truth. This means "terribly non-computable".

The theory of computability is full of results showing that the
behavior of machines is terribly NOT computable, and the machine's
theology is full of highly undecidable sentences. This should kill any
reductionist view of what numbers are capable of.

> To me, any time you say that comp explains something or direct me to
> your
> work, it's the same as someone saying 'The Bible explains that'.

I have worked a lot to make all this available to any good-willing
person. The first six steps of the UDA in the sane04 paper can be
understood without reading any textbook. Step seven needs familiarity
with the Church-Turing thesis, or with a bit of computer programming.
The AUDA "interview of the UM" needs some familiarity with Gödel's
1931 paper.
It should be obvious that computationalism needs a bit of computer
science.

> Not
> trying to disparage your way of teaching or motivating, just saying
> that I can't seem to do anything with it.

You can remember the result, which is going in *your* direction (at
least UDA). We cannot have both comp and materialism. You keep
materialism, so you are coherent in abandoning comp. Unfortunately the
result is unintelligible, because you don't say explicitly what is
non-Turing-emulable in the human body.

> To me, if it can't be made
> understandable within the context of the discussion at hand, it's
> better left to another discussion.

Just tell us what you don't understand.

>
>>> I'm
>>> just going by my observation that numbers are in many ways
>>> everything
>>> that feeling is not. To get to the feeling of numbers, you have to
>>> look at something like numerology.
>>
>> I doubt that very much. Lol.
>> All you need is computer science. Actually all you need is addition
>> and multiplication (and working a little bit, well, a lot probably).
>
> What are your doubts based upon?

Numerology is poetry. It has nothing to say about the consequences of
comp. To refer to numerology in that setting is like asking an
astrologer to send a rocket into space.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jul 23, 2011, 5:53:23 AM7/23/11
to everyth...@googlegroups.com

A sculpture (non-moving, dead)? Or a zombie (behavior is preserved)?

In both cases it makes DNA magical, infinite, or non-Turing-emulable. It
also makes the theory of evolution doubtful, because it means that
nature has to take into account infinite information to select
organisms. Biological evidence points on the contrary to nature betting
on approximations and redundancy, and allowing a big range of
perturbation of its elements. Our material constitution changes all the
time, and allows contingent variations which would be hard to manage if
all the decimals of the physical parameters had to be taken into account.

Bruno

http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jul 23, 2011, 6:49:41 AM7/23/11
to everyth...@googlegroups.com

On 22 Jul 2011, at 17:11, Craig Weinberg wrote:

>> Unless you believe in zombies, the point is that there *is* enough
>> phenomenological qualia and subjectivity, and contingencies, in the
>> realm of numbers. The different 1-views (the phenomenology of mind, of
>> matter, etc.) are given by the modal variants of self-reference. This
>> has been done and this does explain the shape of modern physics (where
>> physicists are lost in a labyrinth of incompatible interpretations).
>> Most of the quantum weirdnesses are theorems in arithmetic.
>
> I believe in zombies insofar as it would be possible to simulate a
> human presence with a YouTube flip book as I described, or to
> simulate a human brain digitally, which would be a zombie as far as
> having any internal awareness beyond the semiconductor experience of
> permittivity/permeability/wattage, etc.


So they would behave like you and me, yet have consciousness of
permittivity/permeability/wattage?

>
>>> so that we may not only have
>>> restricted access by virtue of our own separation from each other,
>>> but
>>> qualia itself may somehow present the experience of entities which
>>> we
>>> would consider to be in the future as well as the past.
>>
>> Nonsense with comp. We just cannot *assume* things like past and
>> future.
>
> I'm saying that we human beings consider them to be in the future and
> the past, not that there is a future or past.

I am not sure I understand what you mean.

>
>> That is their error. You don't need to copy them.
>
> You think that asserting a hypothesis that feeling is not quantifiable
> is the same thing as rationalizing genocide and slavery?

Not at all. Comp prevents feeling from being quantifiable. I am on your
side here.

> I think it's
> just the opposite. It's the belief in arithmetic over subjectivity
> that is leading the planet down the primrose path to asphyxiation and
> madness.

You are the one disallowing subjectivity to some entities, based on
their 'number skin'. You are the reductionist here, telling us that
only wet human brains can think.
On the contrary, mechanism, when well understood, is a vaccine against
reductionism, even against reductionism about robots and numbers.

>
>> Only persons can think.
>
> I thought the point of comp was that digital simulation is sufficient
> to simulate thought.

Thought/consciousness exists only in the arithmetical platonia.
Digital emulation makes them only relatively accessible to universal
machines, relative to other universal machines. I know this is a bit
of a subtle, counter-intuitive point.

>
>> That is tautological. I agree of course. But the question is about
>> the
>> nature of that system. You seem to want it described by physics. This
>> is logically OK, but you have to abandon comp. That's all.
>
> If comp cannot embrace physics, and physics cannot embrace comp, then
> we have to turn to something which reconciles both.

Comp explains where the laws of physics come from, and this without
eliminating persons and souls.

>
>> I am not convinced by argument of impossibility pointing on actual
>> technology.
>
> Not sure what you mean.

You were arguing from the current and contingent shape of today's
technology.

Bruno

Craig Weinberg

unread,
Jul 23, 2011, 8:27:24 AM7/23/11
to Everything List
On Jul 23, 12:14 am, meekerdb <meeke...@verizon.net> wrote:
> On 7/22/2011 8:52 PM, Craig Weinberg wrote:
>
> Where does the badness come from?  The afferent neurons?

It comes from the diminishing number of real neurons participating in
the network, or, more likely, the unfavorable ratio of neurons to
pegs.

> But that's the crux of the argument.  If behavior isn't everything then,
> according to you, a person whose brain has been replaced by artificial,
> but functionally identical elements, could be a philosophical zombie.  
> > One whose every behavior is exactly like that of a person with a biological
> brain - including reporting the same feelings.  Yet that is contrary to
> your assertion that they would exhibit dementia.

The reason we won't get a philosophical zombie is that the premise
that an artificial simulation of a nervous system cell can be
functionally identical is faulty. Identical is identical. Artificial
is not. The degree to which the peg resembles the cell physically may
directly limit its functional viability, because what we see of a
cell from the outside is only half of what the cell is. The other half
requires that we be the cell. We may not be able to be a non-cell at
all, even though from the outside its function seems the same as a
natural cell's.

To set the equivalence between the natural and artificial neuron in
advance is to load the question. It assumes already that it is the
function of the brain to create consciousness through neurological
activity, whereas I think that the reality is neurological activity
and consciousness are both causes and symptoms of each other.
Imitating the neuron's behavior doesn't automatically invoke the
ability to imitate a neuron's awareness. It's the awareness of the
neurons themselves that is aggregated as our human consciousness, not
just the web of interactions between them.

> > Exactly what I've been saying. If you model only the superficial
> > behaviors, you can't expect the meaningful roots of those behaviors to
> > appear spontaneously.
>
> No you've been saying more than that.  You've been saying that even if
> the artificial elements emulate the biological ones at a very low level
> they won't work unless they *are* biological.  When I said that if you
> have to model at the quark level you might as well make up "real"
> neurons that was a recommendation of efficiency.  According to Bruno,
> and functionalist theory, it might be very inefficient to emulate the
> quarks with a Turing machine but it is in principle equally effective.

It's not that they have to *be* biological, it's that the simulation
has to use materials which can honor the biological level of
intelligence as well as the neurological. Silicon is already made of
something that behaves in a certain way. The strength of that
material, its reliable, semiconductive nature, makes it ideally
transparent to project our own sensorimotive patterns through. That
quality is the very thing that prevents it from ever being able to
host an unreliable, multivalent subjective entity. I posted about this
last night: The Glass Brain - http://s33light.org/post/7959078633

Craig

Craig Weinberg

unread,
Jul 23, 2011, 8:36:50 AM7/23/11
to Everything List

On Jul 23, 12:21 am, meekerdb <meeke...@verizon.net> wrote:

> "Forensically"??  Do we need a Weinberg-English dictionary?

I love forensically for this. It implies tracing a chain of causes
backwards, in a clinical, detached, bloodless way. With each step of
the regression, possibilities are narrowed down to fit the reality
that we think we know now - the corpus. I'm saying that the corpus of
our observation is not the observer. The math is derived from the
physics, which is inexplicably given, as well as the physics being
derived from math (which is inexplicably given as well, but its logic
is so compelling that it seduces us into imagining otherwise). What
quarks do is not automatically what any imaginable hypothetical quark-
analog would do. We see atoms behave in a way that can be explained by
a quark model and then imagine that we can make a quark by exporting
that model with sufficient detail into a computer. The model is a
metaphor; it's not really something independent of our minds and
bodies.

1Z

unread,
Jul 23, 2011, 11:06:37 AM7/23/11
to Everything List


On Jul 22, 10:55 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> I'm saying that if you kept randomly replacing neurons it would
> eventually look like dementia or some other progressive brain-wasting
> disease.


Functionally equivalent means functional equivalence. You
are effectively saying that there is no such thing.

1Z

unread,
Jul 23, 2011, 11:11:46 AM7/23/11
to Everything List


On Jul 22, 11:05 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> Are you positing a universal substance of resemblance? How does it
> work?

No. I am proposing that things have properties, as an objective
fact, and that different things can have the same properties,
also as an objective fact.

> If I see two mounds of dirt they might look the same to me, but maybe
> they host two different ant colonies.

Then they resemble each other up to a point. That two things
resemble each other 90% is still objective.

1Z

unread,
Jul 23, 2011, 11:40:29 AM7/23/11
to Everything List


On Jul 23, 2:35 am, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Jul 22, 6:25 pm, meekerdb <meeke...@verizon.net> wrote:
>
> >But that's contradicting your assumption that the "pegs" are transparent
> >to the neural communication:
>
> >"If the living
> >cells are able to talk to each other well through the prosthetic
> >network, then functionality should be retained"
>
> Neurological functionality is retained but there are fewer and fewer
> actual neurons to comprise the network, so the content of the
> conversations is degraded, even though that degradation is preserved
> with high fidelity.

Assuming the replacement neurons aren't functionally equivalent.

> > Whatever neurons remain, even if it's only the afferent/efferent
> >ones, they get exactly the same communication as if there were no "pegs"
> >and the whole brain was neurons.
>
> Think of them like sock puppet/bots multiplying in a closed social
> network. If you have 100 actual friends on a social network and their
> accounts are progressively replaced by emulated accounts posting even
> slightly unconvincing status updates,

Why would "slightly unconvincing" fall under "exact funcitonal
replacement"?

>you rapidly lose interest in
> those updates and either route around them, focusing on the
> diminishing group of your original non-bots, or check out of the
> network altogether. A neuron is more than its communication. A
> communicating peg cannot communicate feelings that it doesn't have, it
> can only emulate computations that are based upon feeling correlates.
>
> >You're evading the point by changing examples.
>
> Not intentionally. It's just that the example is built on fundamental
> assumptions which I think are not only untrue, but buried in the gap
> between our understanding of consciousness and our understanding of
> everything else.

IOW: you think the Neurone Replacement Hypothesis doesn't
disprove your theory because you think your theory is correct.
See the problem?

> The assumption being that our consciousness must work
> like everything else that our consciousness can examine objectively,
> whereas my working assumption is to suppose that our consciousness
> works in exactly the opposite way, and that opposition itself is
> critically important and fundamental to any understanding of
> consciousness. Observing our neurons' behaviors is like chasing
> billions of our tails, and assuming that their heads must be our head.
> Replacing the tails alone doesn't make our head happen magically. The
> neurons that we see are only the outer half of the neurons that we
> are. The inside looks like our lives, our society, our evolution as
> organisms.
>
> >It does raise in my mind an interesting point though.  These questions
> >are usually considered in terms of replacing some part of the brain (a
> >neuron, or a set of neurons) by an artificial device that implements the
> >same input/output function.  It then seems, absent some intellect
> >vitale, that the behavior of that brain/person would be unchanged.  But
> >wouldn't it be likely that the person would  suffer some slight
> >impairment in learning/memory simply because the artificial device
> >always computes the same function, whereas the biological neurons grow
> >and change in response to stimuli.

There is such a thing as machine learning.
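
For instance, even a toy artificial unit changes its own input/output
function in response to stimuli. A minimal sketch in Python (purely
illustrative; not a claim about real neurons or any particular ML
method):

# A toy error-driven unit: stimulation changes its weights, so it
# does not "always compute the same function".
weights = [0.0, 0.0]

def output(inputs):
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

def stimulate(inputs, target, rate=0.1):
    err = target - output(inputs)   # delta-rule style update
    for i, x in enumerate(inputs):
        weights[i] += rate * err * x

print(output([1, 1]))   # 0 before any stimulation
for _ in range(20):
    stimulate([1, 1], 1)
print(output([1, 1]))   # 1 afterwards -- the function has changed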

1Z

unread,
Jul 23, 2011, 11:43:08 AM7/23/11
to Everything List


On Jul 23, 4:52 am, Craig Weinberg <whatsons...@gmail.com> wrote:

> Muscles aren't moved by neurons, muscles move themselves in sympathy
> with neuronal motivation.

Says who?

1Z

unread,
Jul 23, 2011, 12:02:53 PM7/23/11
to Everything List


On Jul 23, 1:27 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Jul 23, 12:14 am, meekerdb <meeke...@verizon.net> wrote:
>
> > On 7/22/2011 8:52 PM, Craig Weinberg wrote:
>
> > Where does the badness come from?  The afferent neurons?
>
> It comes from the diminishing number of real neurons participating in
> the network, or, more likely, the unfavorable ratio of neurons to
> pegs.


Ie, the replacements are not functionally equivalent, even though
they are stipulated as being equivalent.

> > But that's the crux of the argument.  If behavior isn't everything then,
> > according to you, a person whose brain has been replaced by artificial,
> > but functionally identical elements, could be a philosophical zombie.  
> > One whose every behavior is exactly like a person with a biological
> > brain - including reporting the same feelings.  Yet that is contrary to
> > your assertion that they would exhibit dementia.
>
> The reason we won't get a philosophical zombie is that the premise
> that an artificial simulation of a nervous system cell can be
> functionally identical is faulty. Identical is identical. Artificial
> is not.

Identical in all relevant aspects is good enough. That's a necessary
truth. It might be the case that all relevant aspects are all aspects
(IOW, holism is true and functionalism is false). That isn't a
necessary truth either way. It needs to be argued on the basis of
some sort of evidence.

> The degree to which the peg resembles the cell physically may
> directly limit its functional viability, because what we see of a
> cell from the outside is only half of what the cell is. The other half
> requires that we be the cell. We may not be able to be a non-cell at
> all, even though from the outside its function seems the same as
> natural cells.
>
> To set the equivalence between the natural and artificial neuron in
> advance is to load the question.

and vice versa.

> It assumes already that it is the
> function of the brain to create consciousness through neurological
> activity, whereas I think that the reality is neurological activity
> and consciousness are both causes and symptoms of each other.
> Imitating the neuron's behavior doesn't automatically invoke the
> ability to imitate a neuron's awareness. It's the awareness of the
> neurons themselves that is aggregated as our human consciousness, not
> just the web of interactions between them.
>
> > > Exactly what I've been saying. If you model only the superficial
> > > behaviors, you can't expect the meaningful roots of those behaviors to
> > > appear spontaneously.
>
> > No you've been saying more than that.  You've been saying that even if
> > the artificial elements emulate the biological ones at a very low level
> > they won't work unless they *are* biological.  When I said that if you
> > have to model at the quark level you might as well make up "real"
> > neurons that was a recommendation of efficiency.  According to Bruno,
> > and functionalist theory, it might be very inefficient to emulate the
> > quarks with a Turing machine but it is in principle equally effective.
>
> It's not that they have to *be* biological, it's that the simulation
> has to use materials which can honor the biological level of
> intelligence as well as the neurological.

Why? If what you have is a functional black
box ITFP, then it doesn't matter what is inside
the black box.

Craig Weinberg

unread,
Jul 23, 2011, 12:05:26 PM7/23/11
to Everything List
On Jul 23, 5:41 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

> It embraces it at many places. First the first person indeterminacy
> leads to the taking into account of uncomputable sequences in the
> first person experiences. Just iterate the Washington-Moscow
> experience n times. There will be 2^n resulting versions of you, and
> most will acknowledge the apparent non computability of their history
> (like WWMMWWWWMWMMWMMMMWWW ...).

Ok, I've been able to parse the first six steps now, more or less. I
think that we are talking about two entirely different kinds of
uncomputability. I think yours is borne of untraceable variables in
the context of recursive hypercomplexity, while mine is ontologically
unquantifiable in its elemental simplicity. Your view is concerned
with the logic of circumstantial process, where mine is the trans-
logical experience of the processors themselves.
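
To make the branching concrete, here is a toy Python sketch of the
iteration as I read it (an illustration only, not Bruno's UDA
formalism):

# n self-duplications yield 2^n candidate histories; a typical
# history reads as a random, patternless W/M string.
import random
from itertools import product

n = 5
histories = [''.join(h) for h in product('WM', repeat=n)]
print(len(histories))   # 2**n == 32
print(''.join(random.choice('WM') for _ in range(20)))
# e.g. 'WMMWWMWMMMWWMWWMMWMW' -- no compressible pattern in sight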

I like your thought experiment, and it's very interesting, but I might
accuse it of limiting 1p phenomena to a 3p report of 1p
circumstance. The diary of the subject is really the subject you are
working with, the facts of the subject's relation to their geography,
etc., rather than what it is to be able to feel like a person. In your
exercise, I would say that you could scan a person's body and upload
it into a swarm of nanobot printers to create your hyper-clones, who
would maybe have the same memories as the original but from the
instant they incarnate in a different city and/or different time, they
begin to diverge. The Moscow h-c cloned from a Polish original would
have a different experience of course from the Washington DC h-c, and
I don't think we know enough about how memory and meaning function to
know whether there would be a quasi amnesiac depersonalization at
having been reborn as a prefabricated adult. Identity might be
disproportionately re-imprinted, as in a psychedelic response, to
being expressed through a perfect replica of another body in another
life.

I was thinking about how a sperm resembles a brain and spinal cord but
that the egg is more like a microcosm of a world. Conception plays out
metaphorically as a miniature sensorimotive self entering a single
life as a sphere which progressively articulates itself as it absorbs
not only the genetic information, but the informer as well.

> Secondly, at the modal first order level, none of the hypostases are
> decidable. Provable Bp is PI-2 complete, and true Bp is PI-1 complete in
> the oracle of truth. This means "terribly non computable".
> The theory of computability is full of results showing that the
> behavior of machines is terribly NOT computable, and the machine's
> theology is full of highly undecidable sentences. This should kill any
> reductionist view of what numbers are capable of.

Sure, yeah. I can just look at

x = 0.999...
10x = 9.999...
10x - x = 9.999... - 0.999...
9x = 9
x = 1

and see that numbers aren't what they might appear to be. I'm talking
about something else though. I'm talking about numbers not being able
to feel anything, but that numbers and feelings arise out of each
other, and that the two phenomena represent opposite ends of an
involuted continuum.
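
(As a side note on the derivation above: the same identity can be
checked in exact arithmetic; a quick Python sketch, nothing more.)

# Partial sums of 9/10 + 9/100 + ... close in on 1; the gap to 1
# shrinks by a factor of 10 per term and vanishes in the limit.
from fractions import Fraction

total = Fraction(0)
for k in range(1, 20):
    total += Fraction(9, 10**k)
print(1 - total)   # 1/10000000000000000000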

> You can remember the result, which is going in *your* direction (at
> least UDA). We cannot have both comp and materialism.

I think that the fact we can even talk about comp or materialism is
evidence that we of course must have them both. Why does it have to be
one or the other? I can excite my body by thinking about something
exciting, and my brain can excite me by metabolizing a stimulant
molecule. Where's the conflict?

>You keep
> materialism, so you are coherent in abandoning comp. Unfortunately the
> result is non intelligible, because you don't say explicitly what is
> non Turing emulable in the human body.

What is non Turing emulable is the experience of the human life that
is associated with and through the body. The body alone is just a
cadaver that has been temporarily prevented from decaying.

> Just tell us what you don't understand.

Step 7 for starters. But I get the gist I think. You're dealing with a
tokenized 3p view of 1p experience, which is great for your purposes.
I'm more about reconciling the full depth and breadth of 1p sentience
with physical phenomena and the cosmos in general.

Craig Weinberg

unread,
Jul 23, 2011, 12:23:31 PM7/23/11
to Everything List
On Jul 23, 5:53 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

> A sculpture (non moving, dead)?  Or a zombie? (behavior is preserved)

I would not call it 'behavior' unless that is understood to exclude
agency. I'd just call it mechanism. A zombie also is both too somatic
and too necrotic a term. More like an automaton or a cartoon.

If you make a YouTube flip book, with an ELIZA type bot behind it, is
that a preservation of behavior? Behavior without any intention behind
it, without any motive sense of its own, is a reflection of the
motive sense of the creator and the audience.
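
For scale, the machinery an ELIZA-type bot needs is tiny. A toy
sketch (hypothetical rules, purely illustrative):

# Canned pattern->response rules produce conversation-shaped output
# with no intention or motive sense behind it.
import random

rules = {
    'i feel': 'Why do you feel that way?',
    'because': 'Is that the real reason?',
    'hello': 'Hello. What is on your mind?',
}

def respond(line):
    low = line.lower()
    for pattern, reply in rules.items():
        if pattern in low:
            return reply
    return random.choice(['Tell me more.', 'Go on.'])

print(respond('I feel like a sock puppet'))  # 'Why do you feel that way?'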

> In both cases it makes DNA magical, infinite, or non Turing emulable. It
> also makes the theory of evolution doubtful, because it means that
> nature has to take into account infinite information to select the
> organisms. Biological evidence points, on the contrary, to nature betting
> on approximations and redundancy, and allowing a big range of perturbation
> of its elements. Our material constitution changes all the time, and
> allows contingent variations which would be hard to manage if all
> the decimals of the physical parameters had to be taken into account.

Right, I don't think that DNA is the only possible life-like
construct, it's just the one that happened to have happened. Like a
quantum experiment, there may be a kind of backward reaching wave
function collapse which keeps life-like elaborations from cropping
up in different areas of the periodic table. Not sure
that our conscious engineering of it can change that ruling, or
whether that recipe is local to a particular range of physical
circumstances...will the elements ever spontaneously drift into new
behaviors? I doubt it, but who knows. This area of guessing what life
could or could not be made of is way more speculative than my
hypothesis gets into. I'm mainly interested in the big picture of what
the cosmos actually is.

Craig
http://s33light.org

Craig Weinberg

unread,
Jul 23, 2011, 12:50:18 PM7/23/11
to Everything List
On Jul 23, 6:49 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

> On 22 Jul 2011, at 17:11, Craig Weinberg wrote:
> > I believe in zombies as far as it would be possible to simulate a
> > human presence with a YouTube flip book as I described, or a to
> > simulate a human brain digitally which would be zombies as far as
> > having any internal awareness beyond the semiconductor experience of
> > permittivity/permeability/wattage, etc.
>
> So they would behave like you and me, yet have consciousness of  
> permittivity/permeability/wattage?

The overall presentation would respond in a human-like way to the
outside human observer. They might not fool a cat or a baby.
Internally, yes, just one enormously complicated glassy molecule. A
megamolecule rather than a meta-meta-meta-molecule (cell-organism-
nervous system).

> > I'm saying that we human beings consider them to be in the future and
> > the past, not that there is a future or past.
>
> I am not sure I understand what you mean.

As living human beings, we are participants in (our) cosmic Runtime.
Qualia is our access to exo-Runtime from within it. It's like our
lifeline to the infinite. I like to say 'If electricity is about
pushing waves through particles, then sense is about pulling wholes
through holes'. Holes being openings in the mask or mesh which
separates neuron from neuron, DNA from cytoplasm, eyeball from brain,
light source from retina, etc. Wholes being coherent meaningful
gestalts or experiences. The qualia themselves are experienced in
Runtime, but they are pieces of the firmament beyond Runtime, which is
the singularity, where causality and time-space emerge from. When I
see blue, I see all the blue that ever was, and perhaps all that ever
will be. Through the limited holes of my particular neurological
organism-within-an-organism capability of course.

> You are the one disallowing subjectivity to some entities, based on  
> their 'number skin'. You are the reductionist here, telling us that  
> only wet human brains can think.
> On the contrary, mechanism, when well understood, is a vaccine against  
> reductionism, even against reductionism of robots and numbers.

Oh, no, I just mean making an entity out of something other than us
will require more than a top level emulation of our external
behaviors. If you made a nanobot that could produce self-replicating
molecules which self-elaborated into cell, tissue, and organ
equivalents, then you may very well get a kind of life form with a
kind of consciousness, I just think that it will be a different kind
of consciousness from our own depending on how different the whole
physiology is. There may be some kind of anthropological experiential
residue at the somatic level though too. It's hard to say. I'm just
looking at the current technology and saying that you can't spin a
brain out of glass, you have to have a few major technological
breakthroughs in materials and synthetic biology first.

Even still, I don't know that it will do you much good because the
final product, if it is alive, will be just as hard to wrangle as any
other living organism. Silicon is too polite of a material to embody
the ferocity and volatility of life. It's an ontological oxymoron.

> Thought/consciousness exists only in the arithmetical platonia.  
> Digital emulation makes them only relatively accessible to universal  
> machines, relatively to other universal machines. I know this is a bit
> of a subtle, counter-intuitive point.

What if thought/consciousness INsists through the physical universe
instead?

> Comp explains where the laws of physics come from, and this without  
> eliminating the person and souls.

Does it explain where comp comes from?

Craig
http://s33light.org

Craig Weinberg

unread,
Jul 23, 2011, 12:52:20 PM7/23/11
to Everything List
I'm saying that functional equivalence is directly proportional to
material equivalence as well as behavioral equivalence.

Evgenii Rudnyi

unread,
Jul 23, 2011, 1:00:15 PM7/23/11
to everyth...@googlegroups.com
On 23.07.2011 18:05 Craig Weinberg said the following:

> I was thinking about how a sperm resembles a brain and spinal cord
> but that the egg is more like a microcosm of a world. Conception
> plays out metaphorically as a miniature sensorimotive self entering a
> single life as a sphere which progressively articulates itself as it
> absorbs not only the genetic information, but the informer as well.

You might be interested in the statement by Dick

http://groups.google.com/group/everything-list/msg/76da2f473b3e9f96

"IF microtubules in the brain have coherence properties that equate to
consciousness
GIVEN that those microtubules map in the sense of a fate map from the
cortex of the one cell (amphibian) embryo to the brain
THEN we ought to be able to investigate those coherence properties
(consciousness?) in the one cell embryo."

If you would like to learn more about embryogenesis:

http://embryogenesisexplained.com/

The course will start again in Second Life in October.

Evgenii

Craig Weinberg

unread,
Jul 23, 2011, 1:05:39 PM7/23/11
to Everything List
On Jul 23, 11:11 am, 1Z <peterdjo...@yahoo.com> wrote:
> On Jul 22, 11:05 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > Are you positing a universal substance of resemblance? How does it
> > work?
>
> No. I am proposing that things have properties, as an objective
> fact,and that different things can have the same properties,
> also as an objective fact.

I don't think there is such a thing as an objective property. If you
are the only thing in the universe, you have no properties. It is only
by relation to other things that properties can arise. I'm a human
sized thing, so walls do not have the property of being a possible
place for me to stand. If I'm a fly or ant sized thing, walls are
great places to park, and my water comes in handy spheres. Is water a
sphere or a formless fluid? Are spheres themselves spheres or are they
flat planes when you are small enough to stand on their surface?

> > If I see two mounds of dirt they might look the same to me, but maybe
> > they host two different ant colonies.
>
> Then they resemble each other up to a point. That two things
> resemble each other 90% is  still objective.

They don't resemble each other 90% to the ant. Does your home resemble
a stranger's home on the other side of the world? If you woke up
there, would you be able to even get through your day normally?

What if someone buried a 12 pound diamond under one mound and the
other one had an IED set to detonate 12 pounds of C4 on contact? You
can't tell the difference, but a bomb sniffing dog can. It's a
different universe depending on what you are. A cat is a pet to us, a
monster to a mouse. Our hair is a forest to a mite. There's no way for
us to find a universal common nature that makes one thing like
another, even if they seem to be the 'same' thing to us. A does not
equal A, except in our subjective awareness.

Craig

Craig Weinberg

unread,
Jul 23, 2011, 1:17:53 PM7/23/11
to Everything List
On Jul 23, 11:40 am, 1Z <peterdjo...@yahoo.com> wrote:
> On Jul 23, 2:35 am, Craig Weinberg <whatsons...@gmail.com> wrote:

> > Think of them like sock puppet/bots multiplying in a closed social
> > network. If you have 100 actual friends on a social network and their
> > accounts are progressively replaced by emulated accounts posting even
> > slightly unconvincing status updates,
>
> Why would "slightly unconvincing" fall under "exact functional
> replacement"?

Because it's not possible for the emulation to simulate first person
participation forever from a third person design. First person
participants don't even know what they are going to say or do in a
given situation. The sense of what the thing is leaks through sooner
or later.

> IOW: you think the Neurone Replacement Hypothesis doesn't
> disprove your theory because you think your theory is correct.
> See the problem?

If my theory is correct, the Neuron Replacement Hypothesis is a Red
Herring. It's not a problem, it's a solution.

> There is  such a thing as machine learning.

Definitely. Inorganic mega-molecules can do amazing things. Enjoying a
steak dinner isn't one of them though.

Craig
http://s33light.org

Craig Weinberg

unread,
Jul 23, 2011, 1:22:56 PM7/23/11
to Everything List
That's my theory. It's not as if your neurons climb into your muscle
fibers and ride them like a donkey or secrete a bunch of molecules
that mechanically force their contraction like a vice. The nerves push
out your desire to move to your muscles, which sympathize with this
desire and contract themselves.

Craig Weinberg

unread,
Jul 23, 2011, 1:36:37 PM7/23/11
to Everything List
On Jul 23, 12:02 pm, 1Z <peterdjo...@yahoo.com> wrote:
> On Jul 23, 1:27 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > On Jul 23, 12:14 am, meekerdb <meeke...@verizon.net> wrote:
>
> > > On 7/22/2011 8:52 PM, Craig Weinberg wrote:
>
> > > Where does the badness come from?  The afferent neurons?
>
> > It comes from the diminishing number of real neurons participating in
> > the network, or, more likely, the unfavorable ratio of neurons to
> > pegs.
>
> Ie, the replacements are not functionally equivalent, even though
> they are stipulated as being equivalent.

No. You're equating the function of the network with the identity of
the participants. I can have an incoherent conversation over a crystal
clear phone system if I am trying to talk to people who are no longer
there, but have only voicemail. Even elaborate voicemail which
operates at the phonetic level to generate AI responses in any
language is not necessarily going to be able to answer my questions:
'Hey Freddie28283457701, did you get the glutamate I ordered yet?'
'Thank you for calling. Your call is important to us. Please stay on
the line'. 'Wow that really sounds just like you Freddie, now where is
the damn glutamate?'

> Identical in all relevant aspects is good enough. That's a necessary
> truth.

It's not possible to know what the relevant aspects are. What are the
relevant aspects of yellow?

> It might be the case that all relevant aspects are all aspects
> (IOW, holism is true and functionalism is false). That isn't a
> necessary truth either way. It needs to be argued on the basis of
> some sort of evidence.

Not necessarily all aspects, but my hypothesis is that you need
material technologies to simulate more than the top level semantic
i/o. Water seems to be important in distinguishing that which can live
and that which cannot. I might start there.

> > To set the equivalence between the natural and artificial neuron in
> > advance is to load the question.
>
> and vice versa.

The burden of proof is on the hypothetical artificial neuron to prove
it's equivalent. The natural neuron doesn't have to prove that it's
nothing more than the artificial one since we know for a fact that our
entire world is somehow produced in the brain without any external
evidence whatsoever of that world.

> > It's not that they have to *be* biological, it's that the simulation
> > has to use materials which can honor the biological level of
> > intelligence as well as the neurological.
>
> Why? If what you have is a functional black
> box ITFP, then it doesn't matter what is inside
> the black box.

It does if you ARE the black box.

Craig
http://s33light.org

Jason Resch

unread,
Jul 23, 2011, 2:04:01 PM7/23/11
to everyth...@googlegroups.com

And they apparently sympathize with the desires of electrons, as Galvani discovered with frog legs.

Jason

Jason Resch

unread,
Jul 23, 2011, 2:20:51 PM7/23/11
to everyth...@googlegroups.com


On Sat, Jul 23, 2011 at 12:17 PM, Craig Weinberg <whats...@gmail.com> wrote:


Definitely. Inorganic mega-molecules can do amazing things. Enjoying a
steak dinner isn't one of them though.



This is just racism.

Jason

1Z

unread,
Jul 23, 2011, 7:04:05 PM7/23/11
to Everything List


On Jul 23, 5:23 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Jul 23, 5:53 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
> > A sculpture (non moving, dead)?  Or a zombie? (behavior is preserved)
>
> I would not call it 'behavior' unless that is understood to exclude
> agency.

Does the presence or absence of agency make a visible difference?

Craig Weinberg

unread,
Jul 23, 2011, 7:05:01 PM7/23/11
to Everything List
On Jul 23, 2:04 pm, Jason Resch <jasonre...@gmail.com> wrote:

>And they apparently sympathize with the desires of electrons, as Galvani
>discovered with frog legs.

That's a good point. It's still the muscle tissue contracting itself
even though it's no longer part of a living frog. I wonder how dead
muscle tissue can be and still respond. I'm guessing that a cooked
frog leg doesn't do the dance, nor would a dried up mummified frog
leg. I wonder too how controllable the leg would be with raw
electrical impulse or whether it just flails.

But yeah, muscle tissue likes to party I guess. It doesn't care
whether the music comes from the brain or a galvanized scalpel. If it
were the same for brain tissue, I'm guessing that strokes would be
reversible with a fresh battery.

I think that electrons are a way of modeling the exterior behavior of
the sensorimotive nature of matter on the molecular level. I'm not
sure that they exist independently of groups of atoms; they might be
more like a measure of how wound up an atom can be.

>This is just racism.

Haha. No inorganic semiconductor is going to eat in my steakhouse. We
don't serve their kind.

Craig

1Z

unread,
Jul 23, 2011, 7:06:45 PM7/23/11
to Everything List
There are robust counterexamples to that. I can replace an iron key
with a brass key. The material isn't important in that case. You need
to argue points, not just announce them.
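
A sketch of the point in Python (a toy model of keys and locks,
purely illustrative): only the bitting matters to the lock, so iron
and brass keys are functionally interchangeable.

# Two keys of different material open the same lock if they present
# the same bitting -- the only property the lock actually tests.
class Key:
    def __init__(self, material, bitting):
        self.material = material
        self.bitting = bitting

def opens(lock_bitting, key):
    return key.bitting == lock_bitting

iron = Key('iron', (2, 5, 3, 1))
brass = Key('brass', (2, 5, 3, 1))
print(opens((2, 5, 3, 1), iron), opens((2, 5, 3, 1), brass))  # True True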

1Z

unread,
Jul 23, 2011, 7:12:53 PM7/23/11
to Everything List


On Jul 23, 6:05 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Jul 23, 11:11 am, 1Z <peterdjo...@yahoo.com> wrote:
>
> > On Jul 22, 11:05 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > > Are you positing a universal substance of resemblance? How does it
> > > work?
>
> > No. I am proposing that things have properties, as an objective
> > fact,and that different things can have the same properties,
> > also as an objective fact.
>
> I don't think there is such a thing as an objective property. If you
> are the only thing in the universe, you have no properties.

You need to argue that, not just proclaim it.

> It is only
> by relation to other things that properties can arise. I'm a human
> sized thing, so walls do not have the property of being a possible
> place for me to stand. If I'm a fly or ant sized thing, walls are
> great places to park, and my water comes in handy spheres. Is water a
> sphere or a formless fluid? Are spheres themselves spheres or are they
> flat planes when you are small enough to stand on their surface?

They're still spheres.

The examples you give are of properties, or rather predicates,
which are actually relations. However, relations can be objective
too. Objective-subjective and intrinsic-relational are orthogonal
axes.

> > > If I see two mounds of dirt they might look the same to me, but maybe
> > > they host two different ant colonies.
>
> > Then they resemble each other up to a point. That two things
> > resemble each other 90% is  still objective.
>
> They don't resemble each other 90% to the ant.

I suppose you mean it can't perceive the resemblance.
But if they have 90% of their properties in common,
they resemble each other 90%. Objectively.
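
One way to cash out "90%" (a toy sketch, assuming resemblance is
measured as shared properties; one possible reading, not a general
theory of similarity):

# Resemblance as the fraction of properties two things share --
# a fixed number, whoever or whatever does the perceiving.
mound_a = {'dirt', 'conical', 'height_30cm', 'ants_species_x'}
mound_b = {'dirt', 'conical', 'height_30cm', 'ants_species_y'}

resemblance = len(mound_a & mound_b) / len(mound_a | mound_b)
print(resemblance)   # 0.6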

>Does your home resemble
> a stranger's home on the other side of the world? If you woke up
> there, would you be able to even get through your day normally?
>
> What if someone buried a 12 pound diamond under one mound and the
> other one had an IED set to detonate 12 pounds of C4 on contact? You
> can't tell the difference, but a bomb sniffing dog can. It's a
> different universe depending on what you are.

No. You perceive the same universe differently depending on
who you are.

1Z

unread,
Jul 23, 2011, 7:15:00 PM7/23/11
to Everything List


On Jul 23, 6:17 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Jul 23, 11:40 am, 1Z <peterdjo...@yahoo.com> wrote:
>
> > On Jul 23, 2:35 am, Craig Weinberg <whatsons...@gmail.com> wrote:
> > > Think of them like sock puppet/bots multiplying in a closed social
> > > network. If you have 100 actual friends on a social network and their
> > > accounts are progressively replaced by emulated accounts posting even
> > > slightly unconvincing status updates,
>
> > Why would "slightly unconvincing" fall under "exact functional
> > replacement"?
>
> Because it's not possible for the emulation to simulate first person
> participation forever from a third person design.

Says who?

> First person
> participants don't even know what they are going to say or do in a
> given situation.

Maybe a brain scan would tell them. The *conscious* self is only
a small part.

>The sense of what the thing is leaks through sooner
> or later.
>
> > IOW: you think the Neurone Replacement Hypothesis doesn't
> > disprove your theory because you think your theory is correct.
> > See the problem?
>
> If my theory is correct, the Neuron Replacement Hypothesis is a Red
> Herring.

And vice versa.

> It's not a problem, it's a solution.
>
> > There is  such a thing as machine learning.
>
> Definitely. Inorganic mega-molecules can do amazing things. Enjoying a
> steak dinner isn't one of them though.

What have qualia to do with learning?

1Z

unread,
Jul 23, 2011, 7:17:15 PM7/23/11
to Everything List


On Jul 23, 6:22 pm, Craig Weinberg <whatsons...@gmail.com> wrote:
> On Jul 23, 11:43 am, 1Z <peterdjo...@yahoo.com> wrote:
>
> > On Jul 23, 4:52 am, Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > > Muscles aren't moved by neurons, muscles move themselves in sympathy
> > > with neuronal motivation.
>
> > Says who?
>
> That's my theory.

Please contextualise it as such.

> It's not as if your neurons climb into your muscle
> fibers and ride them like a donkey or secrete a bunch of molecules
> that mechanically force their contraction like a vice. The nerves push
> out your desire to move to your muscles, which sympathize with this
> desire and contract themselves.

Bloody hell. And do paralyzing toxins make your muscles cool and
disinterested, or do they make your nerves aloof and uncommunicative?