Bruno's blasphemy.

Constantine Pseudonymous

Jul 5, 2011, 11:14:23 PM
to Everything List
Bruno assumes that consciousness preceded matter....

then why do we only find consciousness as a terrestrial phenomena
(suns and stars aren't conscious).. and as a later stage terrestrial
phenomena for that matter.... i.e. water, plants, minerals etc. are
not conscious..... and intellect and understanding in any real sense
are found in even later stage terrestrial forms, and we have physical
explanations for this.......

Bruno sins against naturalism and all that we know and intuit.

He will do anything to resurrect from the dead some rudimentary and
vague Mysticism.

Telmo Menezes

Jul 6, 2011, 6:19:57 AM
to everyth...@googlegroups.com
> Bruno assumes that consciousness preceded matter....
>
> then why do we only find consciousness as a terrestrial phenomena
> (suns and stars aren't conscious).. and as a later stage terrestrial
> phenomena for that matter.... i.e. water, plants, minerals etc. are
> not conscious..... and intellect and understanding in any real sense
> are found in even later stage terrestrial forms, and we have physical
> explanations for this.......

I am unable to observe consciousness outside of myself. I just assume
it on entities that resemble me sufficiently. You might be conflating
the concept of consciousness with the concept of intelligence? We have
physical explanations for intelligence, not for consciousness.

> Bruno sins against naturalism and all that we know and intuit.
>
> He will do anything to resurrect from the dead some rudimentary and
> vague Mysticism.
>


Russell Standish

Jul 6, 2011, 7:44:55 AM
to everyth...@googlegroups.com
Constantine, this is a rather trollish comment coming from an ignorant
position.

Let me put the following gedanken experiment - consider the
possibility that T. Rex might be either green or blue creatures, and
that either possibility is physically consistent with everything we
know about them. In a Multiverse (such as we consider here), we are in
a superposition of histories, which include both green and blue
T. Rexes.

Then one day, someone discovers an exquisitely fossilised T. Rex
feather, from which it is possible to determine the T. Rex's colour by
means of photonics. Let us say, that the colour was determined to be
green to everybody's satisfaction. But there is an alternate universe,
where the colour was determined to be blue. This universe has now
differentiated from our own, on the single fact of T. Rex colour.

The question is, when was the colour of the dinosaur established as a
fact? Many of us many worlders would argue it wasn't established
until the photonics measurement was made - there was no 'matter of
fact' about the dinosaur colour prior to that.
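
To put the same point schematically in plain ASCII (my own rough sketch,
with made-up amplitudes a and b): before anyone examines the feather, the
relevant slice of the multiverse is something like

  |psi> = a |green T. Rex, green feather> + b |blue T. Rex, blue feather>

and the photonics measurement only correlates the observers with one term,

  |psi'> = a |green feather, "we saw green"> + b |blue feather, "we saw blue">

so the colour becomes a matter of fact relative to a branch only once the
measurement entangles us with that branch.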

Generalising from this, it is quite plausible that suns and stars did
not exist prior to there being minds to perceive them. It is somewhat
disorienting to realise this possibility, ingrained as we are from
birth to believe in a directly perceived external reality. Yet the
reality we perceive is very definitely a construction of our minds - a
confabulation as it were, and there is not one scrap of evidence that
that reality exists independently of our minds.

BTW Bruno is not assuming that consciousness preceded matter, he is
instead assuming that consciousness is the result of the running of
some computer program, as I'm sure he would tell you. The consequence
of that latter assumption is that perceived reality is just that - a
perception.


--

----------------------------------------------------------------------------
Prof Russell Standish Phone 0425 253119 (mobile)
Principal, High Performance Coders
Visiting Professor of Mathematics hpc...@hpcoders.com.au
University of New South Wales http://www.hpcoders.com.au
----------------------------------------------------------------------------

B Soroud

Jul 6, 2011, 1:25:21 PM
to everyth...@googlegroups.com
Russell: "Yet the

reality we perceive is very definitely a construction of our minds "

Why do you say such things? How can you know that?

IF this is true, then how did you get into the position to know this? How did you derive a true metanarrative from a "confabulation"?

IF all that we know and perceive is false, how do we assume that idea is then uniquely and exclusively true?

I have heard that theory that the brain constructs our perception of reality, but I don't buy it... because I would ask.... how could we know that, it is contradictory.... they derive such a notion from a study of the reality (the brain etc.) that they say the "brain" "constructs".... they are just speculating from what seemingly makes sense to them....

"not one scrap of evidence that
that reality exists independently of our minds."

people die, all the time... they get buried and life on earth continues... the pyramids stay up... species propagate.... babies are born.... Mozart is still played... and people still cognize these thoughts.


I don't think the choice is between a belief in some so-called physical reductionism or some noetic reductionism....

nor between an objectively existing reality or a hallucination or construction of reality via the brain (which itself is a hallucination or construction, no?) this makes no sense.

I think we simply don't know. agnosticism is best.

B Soroud

Jul 6, 2011, 1:28:20 PM
to everyth...@googlegroups.com
anyways... I'm reconciled with you guys.... I'll try to play nicer yet remain a critic.

p.s. I'm no mathematician, computer scientist, or physicist.... I was schooled in the humanities and avoided mathematics like the plague....... so I will need to ask you guys in the future to translate things into simple English.

I hope this is not necessarily like Plato's academy: "Let no one ignorant of mathematics enter here"

surely there must be a way to express your ideas in plain English.


B Soroud

Jul 6, 2011, 1:40:01 PM
to everyth...@googlegroups.com
But if Bruno is saying that we only have third-person analysis and can't really account for the first-person perspective or origination or history/destiny.... that makes sense.

I believe a lot of people make the error in thinking that science understands how perception works, how vision and hearing work... what the senses are.

neither science nor philosophy truly knows what the senses are... because that is where physical science meets cognitive science and we are utterly perplexed... I don't buy our reductionist and metaphorical descriptions one bit.....

the senses, perception, vision/sight, hearing/sound, mind.... all these and their interrelationships are mysteries.

we can't account for how (or if) the "first-person" derives from third-person descriptions/operations... creating a second that comes to know the first as such, as its basis.

we wish we knew that.
welcome to the desert of supreme ignorance.

meekerdb

Jul 6, 2011, 1:52:49 PM
to everyth...@googlegroups.com
On 7/6/2011 4:44 AM, Russell Standish wrote:
> Constantine, this is a rather trollish comment coming from an ignorant
> position.
>
> Let me put the following gedanken experiment - consider the
> possibility that T. Rex might be either green or blue creatures, and
> that either possibility is physically consistent with everything we
> know about them. In a Multiverse (such as we consider here), we are in
> a superposition of histories, which include both green and blue
> T. Rexes.
>
> Then one day, someone discovers an exquisitely fossilised T. Rex
> feather, from which it is possible to determine the T. Rex's colour by
> means of photonics. Let us say, that the colour was determined to be
> green to everybody's satisfaction. But there is an alternate universe,
> where the colour was determined to be blue. This universe has now
> differentiated from our own, on the single fact of T. Rex colour.
>
> The question is, when was the colour of the dinosaur established as a
> fact? Many of us many worlders would argue it wasn't established
> until the photonics measurement was made - there was no 'matter of
> fact' about the dinosaur colour prior to that.
>

If the decoherence theory of how the classical arises from QM is right, the color
became a classical fact in our branch of the universe a very long time ago.

> Generalising from this, it is quite plausible that suns and stars did
> not exist prior to there being minds to perceive them. It is somewhat
> disorienting to realise this possibility, ingrained as we are from
> birth to believe in a directly perceived external reality. Yet the
> reality we perceive is very definitely a construction of our minds - a
> confabulation as it were, and there is not one scrap of evidence that
> that reality exists independently of our minds.
>
> BTW Bruno is not assuming that consciousness preceded matter, he is
> instead assuming that consciousness is the result of the running of
> some computer program, as I'm sure he would tell you. The consequence
> of that latter assumption is that perceived reality is just that - a
> perception.
>

But it does seem a little presumptuous to suppose that the stars did not
exist before I (who's this "we"?) perceived them and yet claim that
arithmetic existed before anybody could count.

Brent

Terren Suydam

Jul 6, 2011, 2:07:14 PM
to everyth...@googlegroups.com
FWIW, I think a smart guy like you can appreciate that some technical
competence is required to be able to truly criticize a technical idea.
I used to hang out on an artificial intelligence forum that was
plagued by a guy who insisted on critiquing AI every chance he got,
but he had never programmed a computer or had any competence in, or
even interest in, computer science. I'll stop way short of saying technical
competence is required to engage in the dialogue - however, I think
some humility goes a long way if one doesn't have a grasp of some
of the core ideas.

Best,
Terren

B Soroud

Jul 6, 2011, 2:28:26 PM
to everyth...@googlegroups.com
Stars are a body..... our first-person experience is dependent on a body... since first there was stars... second there was body, allowing for first-person experience of stars.

There could be no first-person experience of stars prior to a human form.... There could be no first-person experience prior to form..... unless you believe in some spiritual gnosticism.

If you abstract all feeling and sensation and phenomena and forces from a "monadic consciousness"..... what you have is what can only be called "unconsciousness" and no technical first-person experience whatsoever... especially not in the self-conscious self-identifying rationally self-realized sense.

Form is necessary for first-person experience. Form is necessary for first-person moments. We know no other.

Do you not believe in evolution in some sense?

B Soroud

Jul 6, 2011, 2:29:45 PM
to everyth...@googlegroups.com
I would refer you to the Buddhistic notion of the negation of any ultimate monadic consciousness whatsoever.

Evgenii Rudnyi

Jul 6, 2011, 3:22:29 PM
to everyth...@googlegroups.com
On 06.07.2011 05:14 Constantine Pseudonymous said the following:

If we talk about consciousness, then I guess the next quote from Erwin
Schrödinger should be appropriate

"The doctrine of identity can claim that it is clinched by the empirical
fact that consciousness is never experienced in the plural, only in the
singular. Not only has none of us ever experienced more than one
consciousness, but there is also no trace of circumstantial evidence of
this ever happening anywhere in the world."

What would you say to this? Does the famous physicist also play Mysticism?

A bit more from Schrödinger

http://blog.rudnyi.ru/2011/03/the-arithmetical-paradox-the-oneness-of-mind.html

Evgeny

meekerdb

Jul 6, 2011, 3:36:54 PM
to everyth...@googlegroups.com
On 7/6/2011 12:22 PM, Evgenii Rudnyi wrote:
> On 06.07.2011 05:14 Constantine Pseudonymous said the following:
>> Bruno assumes that consciousness preceded matter....
>>
>> then why do we only find consciousness as a terrestrial phenomena
>> (suns and stars aren't conscious).. and as a later stage terrestrial
>> phenomena for that matter.... i.e. water, plants, minerals etc. are
>> not conscious..... and intellect and understanding in any real sense
>> are found in even later stage terrestrial forms, and we have
>> physical explanations for this.......
>>
>> Bruno sins against naturalism and all that we know and intuit.
>>
>> He will do anything to resurrect from the dead some rudimentary and
>> vague Mysticism.
>>
>
> If we talk about consciousness, then I guess the next quote from Erwin
> Schrödinger should be appropriate
>
> "The doctrine of identity can claim that it is clinched by the
> empirical fact that consciousness is never experienced in the plural,
> only in the singular. Not only has none of us ever experienced more
> than one consciousness, but there is also no trace of circumstantial
> evidence of this ever happening anywhere in the world."

Of course we infer the consciousness of others. To experience more than
one consciousness at the same time seems to defy the meaning of
consciousness. But Schrodinger may have just had in mind that
consciousness is always associated with only a singular body - unlike
the Borg in which a single mind has many bodies.

Brent

B Soroud

Jul 6, 2011, 3:50:01 PM
to everyth...@googlegroups.com
actually the famous physicist famously does play mystic. very incoherently too.

are you trying to advance argument by authority i.e. "famous physicist believes in classical metaphysics therefore there must be something to it"?

On Wed, Jul 6, 2011 at 12:36 PM, meekerdb <meek...@verizon.net> wrote:
On 7/6/2011 12:22 PM, Evgenii Rudnyi wrote:
On 06.07.2011 05:14 Constantine Pseudonymous said the following:
Bruno assumes that consciousness preceded matter....

then why do we only find consciousness as a terrestrial phenomena
(suns and stars aren't conscious).. and as a later stage terrestrial
phenomena for that matter.... i.e. water, plants, minerals etc. are
not conscious..... and intellect and understanding in any real sense
are found in even later stage terrestrial forms, and we have
physical explanations for this.......

Bruno sins against naturalism and all that we know and intuit.

He will do anything to resurrect from the dead some rudimentary and
vague Mysticism.


If we talk about consciousness, then I guess the next quote from Erwin Schrödinger should be appropriate


"The doctrine of identity can claim that it is clinched by the empirical fact that consciousness is never experienced in the plural, only in the singular. Not only has none of us ever experienced more than one consciousness, but there is also no trace of circumstantial evidence of this ever happening anywhere in the world."
Of course we infer the consciousness of others.  To experience more than one consciousness at the same time seems to defy the meaning of consciousness.  But Schrodinger may have just had in mind that consciousness is always associated with only a singular body - unlike the Borg in which a single mind has many bodies.

Brent

What would you say to this? Does the famous physicist also play Mysticism?

Russell Standish

Jul 6, 2011, 6:35:33 PM
to everyth...@googlegroups.com
On Wed, Jul 06, 2011 at 10:25:21AM -0700, B Soroud wrote:
> Russell: "Yet the
> reality we perceive is very definitely a construction of our minds "
>
> Why do you say such things? How can you know that?

Many people working in cognitive science seem to be in agreement on
this point. For a discussion, I would refer you to the book by Dan
Dennett ("Consiousness explained"), or the one I'm reading at the
moment (David Deutsch's "Beginning of Infinity"). I have a copy of
Steven Pinker's "How the Mind Works" - I can't tell you if it also
makes the same claim, as haven't had a chance to read it yet, but I'd
be surprised if it said something different.

>
> IF this is true, then how did you get into the position to know this? How
> did you derive a true metanarrative from a "confabulation"?
>
> IF all that we know and perceive is false, how do we assume that idea is
> then uniquely and exclusively true?
>

Nobody is claiming that all we know and perceive is false. But it is a
confabulation - an interpretation of the sensory data stream based on
our already constructed theories and beliefs. The phenomenon of false
memories is merely the starkest manifestation of this (in that case
the "knowledge" is false - quotes to pacify Bruno :).

... snip ...

> "not one scrap of evidence that
> that reality exists independently of our minds."
>
> people die, all the time... they get buried and life on earth continues...
> the pyramids stay up... species propagate.... babies are born.... Mozart is
> still played... and people still cognize these thoughts.
>

Have you experienced death? Can you experience these other things you
talk of without your mind? That they exist independently of our
perceptions is just a theory. One that happens to be incompatible with the
theory that our minds are computer programs.

>
> I don't think the choice is between a belief in some so-called physical
> reductionism or some noetic reductionism....
>

I wouldn't think so either :).

> nor between an objectively existing reality or a hallucination or
> construction of reality via the brain (which itself is a hallucination or
> construction, no?) this makes no sense.
>
> I think we simply don't know. agnosticism is best.

That is largely giving up. We can know some things.

Russell Standish

Jul 6, 2011, 6:44:24 PM
to everyth...@googlegroups.com
On Wed, Jul 06, 2011 at 10:28:20AM -0700, B Soroud wrote:
> anyways... I'm reconciled with you guys.... I'll try to play nicer yet
> remain a critic.
>
> p.s. I'm no mathematician, computer scientist, or physicist.... I was
> schooled in the humanities and avoided mathematics like the plague....... so
> I will need to ask you guys in the future to translate things into simple
> English.
>
> I hope this is not necessarily like Plato's academy: "Let no one ignorant of
> mathematics enter here"

No - it is not, but a willingness to grapple with the concepts of
mathematics is a requirement for understanding. Most people here are
happy to help someone with genuine inquiry achieve understanding.

I have collected what I think to be an essential toolkit of
mathematical concepts in Appendix A of my book "Theory of Nothing"
(which is available as a free download, or in hardcopy through Amazon).

>
> surely there must be a way to express your ideas in plain English.
>

It is not a question of the language (indeed mathematical notation
rarely appears in this forum due to the difficulty in expressing it in
plain ASCII), but of the concepts. Everyday English usage does not
have the necessary concepts to tackle this subject, and even
our present day mathematics barely does.

Russell Standish

Jul 6, 2011, 6:49:53 PM
to everyth...@googlegroups.com
On Wed, Jul 06, 2011 at 10:52:49AM -0700, meekerdb wrote:
> >The question is, when was the colour of the dinosaur established as a
> >fact? Many of us many worlders would argue it wasn't established
> >until the photonics measurement was made - there was no 'matter of
> >fact' about the dinosaur colour prior to that.
>
> If the decoherence theory of how the classical arises from QM is right, the
> color became a classical fact in our branch of the universe a very
> long time ago.
>

I'm aware of this alternate formulation of QM, but think that has
problems of its own. This is why I tried to phrase the gedanken
experiment in terms of plain vanilla QM which has no decoherence. The
truth, I suspect, lies somewhere in the middle - ie a decoherence-like
effect will probably prove essential to stabilise the classical world.

Cheers

B Soroud

Jul 6, 2011, 9:24:40 PM
to everyth...@googlegroups.com
Russell: "an interpretation of the sensory data stream based on

our already constructed theories and beliefs. "

To me the notion of "sensory data stream" is an interpretation of our bare and naive "perception"... based on -your- theories and beliefs....

I don't believe in the model that says the world is our brain's model of the world... to me that clearly sounds absurd.

how could we know this?..... our brain is a model of itself?

our understanding of vision is still not definitive. Some people think we know it, but I don't.

meekerdb

Jul 7, 2011, 1:12:45 AM
to everyth...@googlegroups.com

Can you explain that? It seems to be Bruno's central claim, but so far
as I can see he only tries to prove that a physical reality is otiose.

Brent

Russell Standish

Jul 7, 2011, 7:59:37 PM
to everyth...@googlegroups.com
On Wed, Jul 06, 2011 at 10:12:45PM -0700, meekerdb wrote:
> >One that happens to be incompatible with the
> >theory that our minds are computer programs.
>
> Can you explain that? It seems to be Bruno's central claim, but so
> far as I can see he only tries to prove that a physical reality is
> otiose.
>
> Brent

Here's my take on it. I guess you read the version I wrote 6 years ago
in ToN.

Once you allow the existence of a universal dovetailer, we are far
more likely to be running on the dovetailer (which is a simple
program) than on a much more complicated program (such as simulating
the universe as we currently see it). Under COMP, the dovetailer is
capable of generating all possible experiences (which is why it is
universal). Therefore, everything we call physics (electrons, quarks,
electromagnetic fields, etc) is phenomena caused by the running of the
dovetailer. By Church-Turing thesis, the dovetailer could be running
on anything capable of supporting universal computation. To use
Kantian terminology, what the dovetailer runs on is the noumenon,
unknowable reality, which need have no connection with the phenomenon
we observe. In fact with the CT-thesis, we cannot even know which
noumenon we're running on, in the case there may be more than one. We
might just as well be running on some demigod's child's playstation,
as running on Platonic arithmetic. It is in principle unknowable, even
by any putative omniscient God - there is simply no matter of fact
there to know.
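
For concreteness, here is a toy sketch of the dovetailing trick in
Python (my own illustration only, not Bruno's actual UD: the function
example_program below is a stand-in enumeration of generators, not a
genuine Goedel numbering of all machines, and the names are just mine).
The only point is that a very small loop suffices to give every program,
eventually, arbitrarily many steps:

import itertools

def example_program(n):
    # stand-in for the n-th program in some enumeration:
    # this one just counts upward from n forever
    k = n
    while True:
        yield (n, k)
        k += 1

def dovetail():
    started = []                                 # programs started so far
    for stage in itertools.count():
        started.append(example_program(stage))   # start one more program
        for prog in started:                     # give every started program
            yield next(prog)                     # one more step

# print the first ten dovetailed steps
for _, step in zip(range(10), dovetail()):
    print(step)

A real UD would of course dovetail over all programs of a universal
machine; the loop structure is the only thing this sketch is meant to
show.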

So ultimately, this is why Bruno eliminates the concrete dovetailer,
in the manner of Laplace eliminating God: "Sire, je n'ai pas besoin de
cette hypothèse" ("Sire, I have no need of that hypothesis").

Anyway, Bruno will no doubt correct any mistaken conceptions here :).

Cheers

meekerdb

Jul 7, 2011, 8:35:05 PM
to everyth...@googlegroups.com

That's what I thought he said. But I see no reason to suppose a UD is
running, much less running without physics. We don't know of any
computation that occurs immaterially. So I assumed I didn't understand
Bruno's argument correctly.

Brent

Craig Weinberg

Jul 8, 2011, 8:46:35 AM
to Everything List
> That's what I thought he said.  But I see no reason to suppose a UD is
> running, much less running without physics.  We don't know of any
> computation that occurs immaterially.

All computation occurs materially and immaterially. An abacus doesn't
count itself. You ultimately have to have a conscious interpreter to
signify any particular text as quantitatively meaningful. Unplug all
monitors from all computers and what do you have left? Expensive
paperweights.

Why not just see perception as both local-solipsistic and generic-
universal? Isn't that exactly what it seems to be - a phenomena which
both seamlessly integrates psychological experience and physical
existence together in some contexts and clearly distinguishes between
them in others? If that's the case, then why not see that principle of
a meta-dualism which is a continuum between a dualism and two monisms
(each representing each other as the opposite of themselves) as the
principle governing all phenomena, all the way up and down the
macrocosm-mesocosm-microcosm.?

If you can't trust perception, then why do you suppose that you can
trust your perception that you can't trust perception?

If you can't trust physics then how do you explain the fact that
physical entities (bullets, psychoactive molecules) affect
consciousness but not the other way around?

If you trust both perception and physics then all you have to do is
identify the relationship between them as the most likely aspect to be
distorted by both perception and physics, and the most defining of our
subjective condition as a particular subjective phenomenon.

Yes, perception can be tricked and exposed as a limited neurological
phenomenon, however under most circumstances, our perception somehow
seems to do quite an admirable job of passing on to us precise
meanings and high quality information from both straightforward
physical sources and more mysterious and creative psychological
sources. The integrity of that information, as it passes through
countless neurological transductions - from optical-sonic correlations
to gestalt memory associations, is what perception is; not just the
final neurological rattlings, it's the whole thing. Sense is
universal. Not human sense of course. Not physical sense, and not
psychological sense, but the sense period, common and uncommon, is the
thread that binds it all together. Whether it's the string of String
theory, or a strand of DNA, or a string of alphanumeric characters, a
conversation thread, etc. it's all about pattern and sense.

Bruno Marchal

Jul 8, 2011, 12:09:57 PM
to everyth...@googlegroups.com

On 08 Jul 2011, at 01:59, Russell Standish wrote:

> On Wed, Jul 06, 2011 at 10:12:45PM -0700, meekerdb wrote:
> >>> One that happens to be incompatible with the
> >>> theory that our minds are computer programs.
>>
>> Can you explain that? It seems to be Bruno's central claim, but so
>> far as I can see he only tries to prove that a physical reality is
>> otiose.
>>
>> Brent
>
> Here's my take on it. I guess you read the version I wrote 6 years ago
> in ToN.
>
> Once you allow the existence of a universal dovetailer, we are far
> more likely to be running on the dovetailer (which is a simple
> program) than on a much more complicated program (such as simulating
> the universe as we currently see it).

I am in a good mood, so I will respect that. I don't want to go into the
"details". Let me just mention that I am not sure the size of the UD code
matters so much. If we assume the *physical* existence of a forever
running UD, then what counts is the number of computational histories
going through my current state. That the UD itself wins might play a role.
But the way I isolate a computer science formulation of the mind-body
problem means that even what you say, if correct, has to be deduced from
the self-introspecting discourse of the machine.


> Under COMP, the dovetailer is
> capable of generating all possible experiences (which is why it is
> universal). Therefore, everything we call physics (electrons, quarks,
> electromagnetic fields, etc) is phenomena caused by the running of the
> dovetailer.

That's correct. Yet I guess many people will suppose that this comes
from the fact that the UD will emulate some physical phenomenon, like
the computation of the gigantic Heisenberg matrix describing the
observable evolution of the entire Milky Way + Magellan and Co. Now,
although the UD does that indeed (trivially), that computation itself
is only playing an infinitesimal part in *our* experience of the
galaxy. A priori we have to take into account *all* computations going
through our actual 3-version of our actual mind state. So the real
physics, the one with the "real" quanta and the qualia, results from
the statistical interference of an a priori vastly bigger set of
computations.


> By Church-Turing thesis, the dovetailer could be running
> on anything capable of supporting universal computation. To use
> Kantian terminology, what the dovetailer runs on is the noumenon,
> unknowable reality, which need have no connection with the phenomenon
> we observe. In fact with the CT-thesis, we cannot even know which
> noumenon we're running on, in the case there may be more than one. We
> might just as well be running on some demigod's child's playstation,
> as running on Platonic arithmetic. It is in principle unknowable, even
> by any putative omniscient God - there is simply no matter of fact
> there to know.

All UDs are equivalent, and neither physics nor the whole theology can
depend on the initial choice.
We can take elementary arithmetic, the combinators, or any Turing
complete formalism.
So we can even take the (rational, not real) Newton laws (but that
would be confusing!), or a rational topological computer (but that
would be treachery with respect to the "correct" extraction of the
consciousness/matter coupling from the introspecting universal machine
discourse).

>
> So ultimately, this is why Bruno eliminates the concrete dovetailer,
> in the manner of Laplace eliminating God "Sire, je n'ai besoin de cet
> hypothese".

No, it is much worse; it is more like "Sire, your hypothesis
(primitive matter) can't be used, and might only prevent the finding
of the solution to the mind-body problem."


>
> Anyway, Bruno will no doubt correct any mistaken conceptions here :).

The impulse is stronger than me :)

Bruno


http://iridia.ulb.ac.be/~marchal/

m.a.

Jul 8, 2011, 12:46:22 PM
to everyth...@googlegroups.com
Dear Bruno,
Can you imagine any way to test whether a higher
intelligence is monitoring the UD and occasionally modifying it?
marty a.

Bruno Marchal

Jul 8, 2011, 12:53:42 PM
to everyth...@googlegroups.com

I'm afraid this is not true. Some people even argue that computation
does not exist, and that the physical world only approximates computations,
according to them.
I have not yet seen a physical definition of computation, except by a
natural phenomenon emulating a mathematical computation. Computers and
computations have been discovered by mathematicians, and there are many
equivalent definitions of the concept, but only if we accept Church's
thesis.

Now if you accept the idea that propositions like "if x divides 4
then x divides 8", or "there is an infinity of twin primes", are true
or false independently of you, then arithmetical truth makes *all* the
propositions about all computations true or false independently of
you. The root of why it is so is Gödel's arithmetization of the syntax
of arithmetic (or Principia). To be a piece of a computation is
arithmetical, even if intensional (it can depend on the *existence* of
a coding, but the coding is entirely arithmetical itself).

In short, I can prove to you that there are computations in elementary
arithmetical truth, but you have to speculate on many things to claim
that there are physical computations. Locally, typing on this
computer makes me OK with the idea that the physical reality emulates
computations, and that makes the white rabbit problems even more
complex, but then we have no choice, given the assumption.


> So I assumed I didn't understand Bruno's argument correctly.

You seem to have difficulty seeing that elementary arithmetic "runs"
the UD, not in time and space, but in the arithmetical truth. Even the
tiny Robinson arithmetic proves all the propositions of the form "there
exist i, j, s such that phi_i(j)^s is the first s steps of the
computation of phi_i(j)". And RA already gives all the proofs, and so
already defines a UD, whose work is entirely made true by the
arithmetical reality, which I hope you can imagine as being not
dependent on us humans, nor the aliens, nor the Löbian machines
themselves (RA + the inductions).

The arithmetization is not entirely obvious. It uses the Chinese
remainder theorem, you need Bézout's theorem, and all in all it is
like implementing a very high level programming language in a very
low level "machine language", with very few instructions.
Matiyasevich has deeply extended that result, by making it possible
to construct a creative set (a universal machine) as a set of non-negative
integers defined by a degree four Diophantine equation. This has the
consequence that you can verify the presence (but not necessarily the
absence) of *any* state in the UD (like the galactic state described
above) in less than 100 additions and multiplications. That is weird!
A degree 4 Diophantine polynomial can emulate any arbitrarily growing
function from N to N, and even from Q to Q. So if you agree that a
natural number is, or is not, a solution of a Diophantine polynomial
independently of you, then all digital computations are realized, or
not, independently of you, me, or the physical universe.

Bruno


http://iridia.ulb.ac.be/~marchal/

meekerdb

Jul 8, 2011, 2:44:34 PM
to everyth...@googlegroups.com
On 7/8/2011 5:46 AM, Craig Weinberg wrote:
>> That's what I thought he said. But I see no reason to suppose a UD is
>> > running, much less running without physics. We don't know of any
>> > computation that occurs immaterially.
>>
> All computation occurs materially and immaterially. An abacus doesn't
> count itself. You ultimately have to have a conscious interpreter to
> signify any particular text as quantitatively meaningful. Unplug all
> monitors from all computers and what do you have left? Expensive
> paperweights.
>

But the question is what makes a conscious interpreter conscious.
Would replacing part of your brain by artificial circuits that are
computationally equivalent preserve your consciousness? Your example of
computers without monitors makes a good point, but one I think different
from your intention. Computation must have some meaning, at least
implicitly. Meaning is conferred by interaction with the world.
Computers with monitors interact rather narrowly via humans. But
consider a computer that runs the utilities in a hospital or flies an
airliner. They don't need humans to look at a screen to give meaning to
their computation.

Brent

Stephen P. King

Jul 8, 2011, 2:51:21 PM
to everyth...@googlegroups.com
Hi Marty,

That cannot happen because the UD is by necessity all inclusive. To
be able to modify it there must exist extensions of the UD that are not
being run in the UD but could be run in the UD. Since the UD is running
all possible strings there are no alternatives that one can choose from
to establish a test.

Onward!

Stephen

Stephen P. King

Jul 8, 2011, 2:59:38 PM
to everyth...@googlegroups.com
Hi Brent,

    I found the papers of Marius Buliga (for example, http://arxiv.org/abs/1103.6007 ) offer an interesting solution to this problem! The point is that a model or map of a computer becomes the territory of a model of the original model, so we break the map vs. territory dichotomy. This bypasses the substitution question completely, I think.

Onward!

Stephen

Stephen P. King

Jul 8, 2011, 3:00:51 PM
to everyth...@googlegroups.com

Evgenii Rudnyi

Jul 8, 2011, 3:47:14 PM
to everyth...@googlegroups.com
On 06.07.2011 21:36 meekerdb said the following:

> On 7/6/2011 12:22 PM, Evgenii Rudnyi wrote:
>> On 06.07.2011 05:14 Constantine Pseudonymous said the following:
>>> Bruno assumes that consciousness preceded matter....

...


>>
>> If we talk about consciousness, then I guess the next quote from Erwin
>> Schrödinger should be appropriate
>>
>> "The doctrine of identity can claim that it is clinched by the
>> empirical fact that consciousness is never experienced in the
>> plural, only in the singular. Not only has none of us ever
>> experienced more than one consciousness, but there is also no trace
>> of circumstantial evidence of this ever happening anywhere in the
>> world."
>
> Of course we infer the consciousness of others. To experience more
> than one consciousness at the same time seems to defy the meaning of
> consciousness. But Schrodinger may have just had in mind that
> consciousness is always associated with only a singular body - unlike
> the Borg in which a single mind has many bodies.
>
> Brent

I do not actually know what Schroedinger wanted to say there; I have to
read him again. Let me quote the last paragraph from that chapter
Oneness of Mind:

"Let me briefly mention the notorious atheism of science which comes, of
course, under the same heading. Science has to suffer this reproach
again and again, but unjustly so. No personal god can form part of a world
model that has only become accessible at the cost of removing everything
personal from it. We know, when God is experienced, this is an event as
real as an immediate sense perception or as one's own personality. Like
them he must be missing in the space-time picture. I do not find God
anywhere in space and time - that is what the honest naturalist tells
you. For this he incurs blame from him in whose catechism is written:
God is spirit."

Evgenii
http://blog.rudnyi.ru

Rex Allen

Jul 8, 2011, 4:06:34 PM
to everyth...@googlegroups.com
Another good Schrodinger quote:

"Scientific theories serve to facilitate the survey of our observations and experimental findings. Every scientist knows how difficult it is to remember a moderately extended group of facts, before at least some primitive theoretical picture about them has been shaped. It is therefore small wonder, and by no means to be blamed on the authors of original papers or of text-books, that after a reasonably coherent theory has been formed, they do not describe the bare facts they have found or wish to convey to the reader, but clothe them in the terminology of that theory or theories. This procedure, while very useful for our remembering the facts in a well-ordered pattern, tends to obliterate the distinction between the actual observations and the theory arisen from them. And since the former always are of some sensual quality, theories are easily thought to account for sensual qualities; which, of course, they never do."



In a similar vein, Democritus’s imagined conversation between the
intellect and the senses:

“Intellect: ‘Color is by convention, sweet by convention, bitter by convention; in truth there are but atoms and the void.’

Senses: ‘Wretched mind, from us you are taking the evidence by which you would overthrow us?  Your victory is your own fall.’”


Rex




On Fri, Jul 8, 2011 at 3:47 PM, Evgenii Rudnyi <use...@rudnyi.ru> wrote:
> On 06.07.2011 21:36 meekerdb said the following:
>>
>> On 7/6/2011 12:22 PM, Evgenii Rudnyi wrote:
>>>
>>> On 06.07.2011 05:14 Constantine Pseudonymous said the following:
>>>>
>>>> Bruno assumes that consciousness preceded matter....
>
> ...
>>>
>>> If we talk about consciousness, then I guess the next quote from Erwin
>>>  Schrödinger should be appropriate

Evgenii Rudnyi

Jul 8, 2011, 4:09:43 PM
to everyth...@googlegroups.com
On 06.07.2011 21:50 B Soroud said the following:

> actually the famous physicist famously does play mystic. very
> incoherently too.
>
> are you trying to advance argument by authority i.e. "famous
> physicist believes in classical metaphysics therefore there must be
> something to it"?

Well, my question to you was ill-formed. Please ignore it.

In that quote I like the observation that I experience only my
consciousness. In general, I like starting with what other people are saying
about a problem. Along this way it seems to make sense to start with famous
people. It does not necessarily mean that they are right. It was after all
just a quote.

By the way, I have just sent another quote from Schroedinger that shows
that his position probably could be considered as some mysticism. An
interesting question is why.

Well, if we are to speak about mysticism, in my collection there is a link to
John Hagelin, see for example

http://worldpeaceendowment.org/invincibility/invincibility8.html

You may want to compare Schroedinger with him.

Evgenii
http://blog.rudnyi.ru

> On Wed, Jul 6, 2011 at 12:36 PM, meekerdb<meek...@verizon.net>
> wrote:
>
>> On 7/6/2011 12:22 PM, Evgenii Rudnyi wrote:
>>
>>> On 06.07.2011 05:14 Constantine Pseudonymous said the following:
>>>
>>>> Bruno assumes that consciousness preceded matter....
>>>>
>>>> then why do we only find consciousness as a terrestrial
>>>> phenomena (suns and stars aren't conscious).. and as a later
>>>> stage terrestrial phenomena for that matter.... i.e. water,
>>>> plants, minerals etc. are not conscious..... and intellect and
>>>> understanding in any real sense are found in even later stage
>>>> terrestrial forms, and we have physical explanations for
>>>> this.......
>>>>
>>>> Bruno sins against naturalism and all that we know and intuit.
>>>>
>>>> He will do anything to resurrect from the dead some rudimentary
>>>> and vague Mysticism.
>>>>
>>>>
>>> If we talk about consciousness, then I guess the next quote from
>>> Erwin Schrödinger should be appropriate


>>>
>>> "The doctrine of identity can claim that it is clinched by the
>>> empirical fact that consciousness is never experienced in the
>>> plural, only in the singular. Not only has none of us ever
>>> experienced more than one consciousness, but there is also no
>>> trace of circumstantial evidence of this ever happening anywhere
>>> in the world."
>>>
>>
>> Of course we infer the consciousness of others. To experience more
>> than one consciousness at the same time seems to defy the meaning
>> of consciousness. But Schrodinger may have just had in mind that
>> consciousness is always associated with only a singular body -
>> unlike the Borg in which a single mind has many bodies.
>>
>> Brent
>>
>>
>>
>>> What would you say to this? Does the famous physicist also play
>>> Mysticism?
>>>

>>> A bit more from Schrödinger
>>>
>>> http://blog.rudnyi.ru/2011/03/the-arithmetical-paradox-the-oneness-of-mind.html
>>>
>>> Evgeny

Bruno Marchal

Jul 8, 2011, 5:23:30 PM
to everyth...@googlegroups.com

On 08 Jul 2011, at 14:46, Craig Weinberg wrote:

>> That's what I thought he said. But I see no reason to suppose a UD
>> is
>> running, much less running without physics. We don't know of any
>> computation that occurs immaterially.
>
> All computation occurs materially and immaterially. An abacus doesn't
> count itself. You ultimately have to have a conscious interpreter to
> signify any particular text as quantitatively meaningful.

The idea here is that a universal interpreter (and I think an abacus does
that job) is enough. And then to reason.
Your assumptions are not clear enough, so I never know if you talk of
what is or of what seems to be.

> Unplug all
> monitors from all computers and what do you have left? Expensive
> paperweights.
>
> Why not just see perception as both local-solipsistic and generic-
> universal?

I think Rex has defended such a view. It does not satisfy me. You start
from the mystery. I limit the mystery to the numbers through the
notion of machines and self-reference.

> Isn't that exactly what it seems to be -

Well, but that is not an argument for a platonist. If it seems like
this, it is certainly not this. You do describe, perhaps correctly, a
first person experience. The problem is to relate them to third person
sharable notions.

> a phenomena which
> both seamlessly integrates psychological experience and physical
> existence together in some contexts and clearly distinguishes between
> them in others? If that's the case, then why not see that principle of
> a meta-dualism which is a continuum between a dualism and two monisms
> (each representing each other as the opposite of themselves) as the
> principle governing all phenomena, all the way up and down the
> macrocosm-mesocosm-microcosm.?
>
> If you can't trust perception, then why do you suppose that you can
> trust your perception that you can't trust perception?

That is a nice argument, but it shows that we cannot doubt
consciousness. We can still doubt all the content of consciousness,
except this one.
This does not force us to start from that concept, except by accepting
its existence, and that it has to be explained. If a part remains not
explainable, then it would be nice to have a meta-explanation for
that. (and this happens with the logic of self-reference)

>
> If you can't trust physics then how do you explain the fact that
> physical entities (bullets, psychoactive molecules) affect
> consciousness but not the other way around?

Consciousness content, like fear, can modify the matter distribution
around. At a deeper level, we have been selecting the realities which
support us for a long time (deep computation).


>
> If you trust both perception and physics

But that is exactly what we should not trust too much, and especially
not take literally.


> then all you have to do is
> identify the relationship between them as the most likely aspect to be
> distorted by both perception and physics, and the most defining of our
> subjective condition as a particular subjective phenomenon.

I think you are bringing some identity thesis, which might force you
to bring infinities in the picture to make it coherent. But you are
not precise enough to make it appear.

Bruno


>
> Yes, perception can be tricked and exposed as a limited neurological
> phenomenon, however under most circumstances, our perception somehow
> seems to do quite an admirable job of passing on to us precise
> meanings and high quality information from both straightforward
> physical sources and more mysterious and creative psychological
> sources. The integrity of that information, as it passes through
> countless neurological transductions - from optical-sonic correlations
> to gestalt memory associations, is what perception is; not just the
> final neurological rattlings, it's the whole thing. Sense is
> universal. Not human sense of course. Not physical sense, and not
> psychological sense, but the sense period, common and uncommon, is the
> thread that binds it all together. Whether it's the string of String
> theory, or a strand of DNA, or a string of alphanumeric characters, a
> conversation thread, etc. it's all about pattern and sense.
>


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

Jul 8, 2011, 9:40:15 PM
to Everything List
Conscious is an informal term, so it depends how you want to use it. I
think of consciousness as the top level meta-awareness of a hierarchy
of levels which might be called awareness, perception, sensation, and
detection, where another person's idea of consciousness would equate
all of those terms. In my usage, the awareness that one is aware,
which would include something like dreaming or hallucinating. A more
medical use of the word might distinguish consciousness as the ability
to respond to external stimuli or produce electrical activity in
particular areas of the brain, etc.

Replacing parts of the brain depends what the artificial circuits are
made of. For them to be experienced as something like human
consciousness then I think they would have to be made of biological
tissue. Awareness isn't calculation, 'information', or
'interpretations'. Those are high-level cognitive abstractions.
Awareness is visceral, concrete, low level sense experience - a
primary presentation rather than a representation.

I'm only using computer screens as an example, but if you extend the
example to include other human devices like an airliner or hospital,
those things still have to be filled with human beings to give them
human meaning. A computer autopiloting an empty plane in a post-
apocalypse world devoid of life would only have electronic and
physical meaning - circuits pushing toward equilibrium, meaningless
bodies of mass hurtling through the atmosphere. It doesn't know what a
plane is. A hospital without any people is an archeological ruin, no
matter how many computers are still connected to it.

Meaning is not only conferred by interaction with the world, meaning
is the world. If you are a human, then your world is a world of human
meaning, which includes condensed reflections of all other meanings to
which our technologically extended neurology permits us access.


On Jul 8, 2:44 pm, meekerdb <meeke...@verizon.net> wrote:
> On 7/8/2011 5:46 AM, Craig Weinberg wrote:
>
> >> That's what I thought he said.  But I see no reason to suppose a UD is
> >> >  running, much less running without physics.  We don't know of any
> >> >  computation that occurs immaterially.
>
> > All computation occurs materially and immaterially. An abacus doesn't
> > count itself. You ultimately have to have a conscious interpreter to
> > signify any particular text as quantitatively meaningful. Unplug all
> > monitors from all computers and what do you have left? Expensive
> > paperweights.
>
> But the question is what makes a conscious interpreter conscious.  
> Would replacing part of your brain by artificial circuits that are
> computationally equivalent preserve your consciousness?  Your example of
> computers without monitors makes a good point, but one I think different
> from your intention.  Computation must have some meaning, at least
> implicitly.  Meaning is conferred by interaction with the world.  
> Computers with monitors interact rather narrowly via humans.  But
> consider a computer that runs the utilities in a hospital or flies an

Craig Weinberg

Jul 8, 2011, 10:17:06 PM
to Everything List
> Your assumptions are not clear enough, so I never know if you talk of what is or of what seems to be.
I'm trying for 'what seems to be what is', since what is isn't
knowable and what seems to be doesn't matter if it doesn't reflect
what is.

> I limit the mystery to the numbers through the notion of machines and self-reference.
If you limit the mystery, then won't what you get back be defined by
how you have defined those limits?

> Consciousness content, like fear, can modify the matter distribution
> around. At a deeper level, we have been selecting the realities which
> support us for a long time (deep computation).
I think that's true or half true, but not even the most evolved lama
or enlightened yogi can fail to react to multiple bullets fired
through their head or a massive dose of cyanide.

> The problem is to relate them to third person sharable notions.
They can't be related except through direct neurological intervention.
There is never going to be a quantitative expression to bring the
color blue to a mind which is part of a brain that has never seen
blue. You can, however, potentially intervene upon the brain
electronically, perhaps simulate a conjoined twin connection, and
create a memory of blue. Blue cannot be described quantitatively
however. An electromagnetic wavelength is not a visual experience,
it's just a measurement of linear quantity.

>We can still doubt all the content of consciousness
Then why not doubt the doubt of all the content of consciousness?

>If a part remains not explainable, then it would be nice to have a meta-explanation for
> that. (and this happens with the logic of self-reference)
Not sure I'm following. The meta explanation is that physics and
perception are two sides of a coin which function in two very
different ways but they overlap in certain ways.

> > If you trust both perception and physics
>
> But that is exactly what we should not trust too much, and especially
> not take literally.
I think it's okay to trust them as long as you understand that the
trust you place in either direction has consequences. I want bridge
builders to take physics very seriously and I want artists to take
their perception very seriously. For myself, I want to be able to
focus on whatever frequencies along the continuum are most appropriate
for the context (sanity).

> I think you are bringing some identity thesis, which might force you
> to bring infinities in the picture to make it coherent. But you are
> not precise enough to make it appear.
Does this help? http://www.stationlink.com/art/dualism5.jpg

meekerdb

Jul 8, 2011, 10:44:30 PM
to everyth...@googlegroups.com
On 7/8/2011 6:40 PM, Craig Weinberg wrote:
> Conscious is an informal term, so it depends how you want to use it. I
> think of consciousness as the top level meta-awareness of a hierarchy
> of levels which might be called awareness, perception, sensation, and
> detection, where another person's idea of consciousness would equate
> all of those terms. In my usage, the awareness that one is aware,
> which would include something like dreaming or hallucinating. A more
> medical use of the word might distinguish consciousness as the ability
> to respond to external stimuli or produce electrical activity in
> particular areas of the brain, etc.
>

I agree that there are different kinds and degrees of consciousness.
Also it seems that a lot of our thinking takes place without consciousness,
cf. the Poincaré effect.

> Replacing parts of the brain depends what the artificial circuits are
> made of. For them to be experienced as something like human
> consciousness then I think they would have to be made of biological
> tissue.

Why? Biological tissue is made out of protons, neutrons, and electrons
just like computer chips. Why should anything other than their
input/output function matter?


> Awareness isn't calculation, 'information', or
> 'interpretations'. Those are high-level cognitive abstractions.
> Awareness is visceral, concrete, low level sense experience - a
> primary presentation rather than a representation.
>

Just assertions. The question is whether something other than you can
have them?

> I'm only using computer screens as an example, but if you extend the
> example to include other human devices like an airliner or hospital,
> those things still have to be filled with human beings to give them
> human meaning. A computer autopiloting an empty plane in a post-
> apocalypse world devoid of life would only have electronic and
> physical meaning - circuits pushing toward equilibrium, meaningless
> bodies of mass hurtling through the atmosphere. It doesn't know what a
> plane is. A hospital without any people is an archeological ruin, no
> matter how many computers are still connected to it.
>

A computer flying an airliner is not very smart, but it would know what
a runway is, what a storm is, the shape of the Earth. A computer that
runs a hospital would know whether there were patients, doctors, or nurses.

> Meaning is not only conferred by interaction with the world, meaning
> is the world. If you are a human, then your world is a world of human
> meaning, which includes condensed reflections of all other meanings to
> which our technologically extended neurology permits us access.
>

You beg the question by specifying "human meaning". Do you suppose that
there is something unique about humans, or can there be dog meaning and
fish meaning and computer meaning?

Brent

Kim Jones

unread,
Jul 9, 2011, 12:14:34 AM7/9/11
to everyth...@googlegroups.com
Indeed, why? Any talk of 'artificial circuits' might risk the patient saying 'No' to the doctor. I want real, digital circuits. Meat circuits are fine, though there might be something better. I mean, if something better than 'skin' comes along, I'll swap my skin for that. Probably need the brain upgrade anyway to read the new skin. You could even make me believe I had a new skin via the firmware in the brain upgrade. No need to change skin at all.

I could even sell you a brain upgrade that looked like it was composed of meat when in fact it was a bunch of something else. You only have to believe what your brain presents to you.

Kim Jones

Bruno Marchal

unread,
Jul 9, 2011, 4:04:37 AM7/9/11
to everyth...@googlegroups.com
Dear Marty,


On 08 Jul 2011, at 18:46, m.a. wrote:

> Dear Bruno,
> Can you imagine any way to test whether a higher
> intelligence is monitoring the UD and occasionally modifying
> it? marty a.
>

As much as I can imagine a higher intelligence monitoring the prime
numbers and occasionally modifying them. As if, on Sundays, numbers
got new divisors.

In other words: hardly. (Just remember that the UD, and its running,
is part of arithmetic.)

Best,

Bruno


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

unread,
Jul 9, 2011, 12:44:03 PM7/9/11
to Everything List
> Why? Biological tissue is made out of protons, neutrons, and electrons
> just like computer chips. Why should anything other than their
> input/output function matter?

A cadaver is made out of the same thing too. You could pump food into
it and fit it with an artificial gut, even give it a synthesized voice
to make pre-recorded announcements and string it up like a marionette.
That doesn't mean it's a person. Life does not occur on the atomic
level, it occurs on the molecular level. There may be a way of making
inorganic molecules reproduce themselves, but there's no reason to
believe that their sensation or cognition would be any more similar
than petroleum is to plutonium. The i/o function is only half of the
story.

> Just assertions. The question is whether something other than you can
> have them?

Why couldn't it? As you say, I am made of the same protons, neutrons,
and electrons as everything else. You can't have it both ways. Either
consciousness is a natural potential of all material phenomena or it's
a unique special case. In the former you have to explain why more
things aren't conscious, and the latter you have to explain why
consciousness could exist. My alternative is to see that everything
has a private side, which behaves in a sensorimotor way rather than
electromagnetic, so that our experience is a massive sensorimotor
aggregate of nested organic patterns.

> A computer flying an airliner is not very smart, but it would know what
> a runway is, what a storm is, the shape of the Earth. A computer that
> runs a hospital would know whether there were patients, doctors, or nurses.

Nah, a computer like that wouldn't know anything about runways,
storms, shapes, or Earth or whether there were patients, doctors, or
nurses. Computers are just mazes of semiconductors which know when
they are free to complete some circuits and not others. A computer
autopilot knows less what a plane is than a cat does. Computers are
automated microelectronic sculptures through which we compute human
sense. They have no actual sense of their own beyond microelectronic
sense.

> You beg the question by specifying "human meaning". Do you suppose that
> there is something unique about humans, or can there be dog meaning and
> fish meaning and computer meaning?

There is certainly something unique about humans in the minds of
humans. Of course there is dog meaning, fish meaning, liver cell
meaning, neuron meaning, DNA meaning, carbon meaning. There isn't
computer meaning though because it's only a computer to a person that
can use a computer. To a cat, it's just a warm box. A cat, however,
makes sense to mice as one thing (monster?), to humans as another
(pet? pest? lunch?), to fleas (home?). Etc. A computer is just a glove
for certain functions of our cognitive/cortical faculties.

Craig

Craig Weinberg

unread,
Jul 9, 2011, 12:58:10 PM7/9/11
to Everything List
Sure, it would be great to have improved synthetic bodies, but I have
no reason to believe that depth and quality of consciousness is
independent from substance. If I have an artificial heart, that
artificiality may not affect me as much as having an artificial leg,
however, an artificial brain means an artificial me, and that's a
completely different story. It's like writing a computer program to
replace computer users. You might find out that digital circuits are
unconscious by definition.

Craig Weinberg

unread,
Jul 9, 2011, 4:57:43 PM7/9/11
to Everything List
Consider that even though the internet is pretty complex, we don't
have to worry about it developing an allergy to its users and locking
them out. Cartoon characters can't think or feel, regardless of how
faithfully they are illustrated. We should not confuse the capacity of
a living thing to have coherent experiences as a single entity with
the capacity of human consciousness to impose its own coherence and
see it reflected in inorganic matter.


Jason Resch

unread,
Jul 9, 2011, 6:08:09 PM7/9/11
to everyth...@googlegroups.com
On Sat, Jul 9, 2011 at 11:44 AM, Craig Weinberg <whats...@gmail.com> wrote:
> Why?  Biological tissue is made out of protons, neutrons, and electrons
> just like computer chips.  Why should anything other than their
> input/output function matter?

A cadaver is made out of the same thing too. You could pump food into
it and fit it with an artificial gut, even give it a synthesized voice
to make pre-recorded announcements and string it up like a marionette.
That doesn't mean it's a person. Life does not occur on the atomic
level, it occurs on the molecular level. There may be a way of making
inorganic molecules reproduce themselves, but there's no reason to
believe that their sensation or cognition would be any more similar
than petroleum is to plutonium. The i/o function is only half of the
story.

> Just assertions.  The question is whether something other than you can
> have them?

Why couldn't it? As you say, I am made of the same protons, neutrons,
and electrons as everything else. You can't have it both ways. Either
consciousness is a natural potential of all material phenomena or it's
a unique special case. In the former you have to explain why more
things aren't conscious, and the latter you have to explain why
consciousness could exist.

This is like having to argue why more atoms aren't alive.  The difference between a life form and a mixture of chunks of coal and water won't be found in comparing the chemicals, the difference is in their organization.  That is all that separates living matter from non-living matter.  Mechanism says the same thing regarding intelligent entities vs. non-intelligent entities.  It comes down to their organization, not any material difference.  Addition can be performed by collections of cells and by logic gates etched on silicon.
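For instance, here is addition built from nothing but AND, OR and XOR gates - a quick Python sketch of the standard ripple-carry construction, written for illustration rather than taken from any particular chip:

def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                        # XOR gives the sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # two ANDs and an OR give the carry
    return s, carry_out

def add(x, y, width=8):
    carry, total = 0, 0
    for i in range(width):                      # ripple the carry from bit 0 upward
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total

assert add(3, 4) == 7                           # cells or silicon, the logic is the same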

Most neurologists consider the retina part of the brain, since processing is performed there.  Could we not build an artificial retina which sent the right signals down the optic nerve and allow someone to see?  Such cyborgs already exist: http://www.cbsnews.com/8301-504763_162-20038162-10391704.html

 
My alternative is to see that everything
has a private side, which behaves in a sensorimotor way rather than
electromagnetic, so that our experience is a massive sensorimotor
aggregate of nested organic patterns.

> A computer flying an airliner is not very smart, but it would know what
> a runway is, what a storm is, the shape of the Earth.  A computer that
> runs a hospital would know whether there were patients, doctors, or nurses.

Nah, a computer like that wouldn't know anything about runways,
storms, shapes, or Earth or whether there were patients, doctors, or
nurses. Computers are just mazes of semiconductors which know when
they are free to complete some circuits and not others.

And brains are just gelatinous tissue with cells squirting juices back and forth.  If you are going to use reductionism when talking about computers, then to be fair you must apply the same reasoning when talking about minds and brains.

 
A computer
autopilot knows less what a plane is than a cat does. Computers are
automated microelectronic sculptures through which we compute human
sense. They have no actual sense of their own beyond microelectronic
sense.

> You beg the question by specifying "human meaning".  Do you suppose that
> there is something unique about humans, or can there be dog meaning and
> fish meaning and computer meaning?

There is certainly something unique about humans in the minds of
humans. Of course there is dog meaning, fish meaning, liver cell
meaning, neuron meaning, DNA meaning, carbon meaning. There isn't
computer meaning though because it's only a computer to a person that
can use a computer.

Do you need another person to look at and interpret the firings of neurons in your brain in order for there to be meaning for your thoughts?  If not, why must there be a user of the computer to impart meaning to its states?

 Jason

Craig Weinberg

unread,
Jul 9, 2011, 8:42:00 PM7/9/11
to Everything List
> The difference between a life form and a mixture of chunks of coal and water won't be found
> in comparing the chemicals, the difference is in their organization. That
> is all that separates living matter from non-living matter

Organization is only part of it. You could try to make DNA out of
something else - substituting sulfur for carbon for instance, and it
won't work. It goes beyond mathematical considerations, since there is
nothing inherently golden about the number 79 or carbon-like about the
number 6. We can observe that in this universe these mathematical
organizations correlate with particular behaviors and qualities, but
that doesn't mean that they have to, in all possible universes,
correlate in that way. Mercury could look gold to us instead. Life
could be based on boron. In this universe, however, there is no such
thing as living matter, there are only living tissues. Cells. Not
circuits.

> Could we not build an artificial retina which sent the right signals down the optic nerve and allow someone to see?

Sure, but it's still going to be a prosthetic antenna. You can
replicate the physical inputs from the outside world but you can't
necessarily replicate the psychic outputs from the visual cortex to
the conscious Self. It's no more reasonable than expecting the
fingernails on an artificial hand to continue to grow and need
clipping. We don't have the foggiest idea how to create a new primary
color from scratch. IMO, until we can do that - one of the most
objective and simple examples of subjective experience, we have no
hope of even beginning to synthesize consciousness from inorganic
materials.

>And brains are just gelatinous tissue with cells squirting juices back and
>forth. If you are going to use reductionism when talking about computers,
>then to be fair you must apply the same reasoning when talking about minds
>and brains.

Exactly. If we didn't know for a fact that our brain was hosting
consciousness through our first hand experience there would be
absolutely no way of suspecting that such a thing could exist. This is
what I'm saying about the private topology of the cosmos. We can't
access it directly because we are stuck in our own private topology.

So to apply this to computers and planes - yes they could have a
private topology, but judging from their lack of self-motivated
behaviors, it makes more sense to think of them in terms of purely
structural and electronic interiority rather than imagining that their
assembly into anthropological artifacts confers some kind of additional
subjectivity.

A living cell is more than the sum of its parts. A dead cell is made
of the same materials with the same organization as a living cell, it
just doesn't cohere as an integrated cell anymore, so lower level
processes overwhelm the whole. Decay is entropy for a body or a piece
of fruit, but a bonanza of biological negentropy for bacteria and
insects.

>Do you need another person to look at and interpret the firings of neurons
>in your brain in order for there to be meaning for your thoughts? If not,
>why must there be a user of the computer to impart meaning to its states?

I'm not saying that there is no meaning to the states of
semiconductors acting in concert within a microprocessor, I'm just
saying that it's likely to be orders of magnitude more primitive than
organic life. To me, it's obvious that the interior experience of
neurons firing is the important, relevant phenomenon while the neuron
side is the generic back end.

Since computers are a reflection of our own cognitive abilities rather
than a self-organizing phenomenon, their important, relevant phenomena
are the signifying side which faces the user. The guts of the computer
are just means to an end. They don't know that they are computers, and
they never will. Computation is not awareness. If it were, you could
invent a new primary color simply by having someone understand a
formula. It's a category error to conflate the two.

Craig

btw, these ideas are not what I have always believed. I have been
thinking about these issues all of my life. My original orientation
was as a strict materialist, so I know very well how to make sense out
of the world that way. I'm just saying that it's missing half of the
story based on an idea of the self as a transparent logical entity
separate from the cosmos that it observes, which it is not. Consciousness
is an extension of perception and awareness, not a disembodied logical
essence. Logic is metaphysical. Sense is physics.


Jason Resch

unread,
Jul 9, 2011, 10:02:35 PM7/9/11
to everyth...@googlegroups.com
On Sat, Jul 9, 2011 at 7:42 PM, Craig Weinberg <whats...@gmail.com> wrote:
> The difference between a life form and a mixture of chunks of coal and water won't be found
> in comparing the chemicals, the difference is in their organization.  That
> is all that separates living matter from non-living matter

Organization is only part of it.

How is it you are so sure that the organization is only part of it?
 
You could try to make DNA out of
something else - substituting sulfur for carbon for instance, and it
won't work.

Sulfur is not functionally equivalent to carbon, it will behave differently and thus it is not the same organization.
 
It goes beyond mathematical considerations, since there is
nothing inherently golden about the number 79 or carbon-like about the
number 6. We can observe that in this universe these mathematical
organizations correlate with particular behaviors and qualities, but
that doesn't mean that they have to, in all possible universes,
correlate in that way. Mercury could look gold to us instead. Life
could be based on boron. In this universe, however, there is no such
thing as living matter, there are only living tissues. Cells. Not
circuits.

The special thing about carbon is that it has four free electrons to use to bond with other atoms (it serves as a glue for holding large molecules together).  While silicon also has 4 free electrons, it is much larger and doesn't hide away between the atoms it is holding together; it would get in the way.  Anything that behaves like a carbon atom in all the same ways could serve as a replacement for the carbon atom; it wouldn't have to be carbon.  For example, let's say we discovered a new quark that could be put together into a super proton with a positive charge of 3 and the mass of 3 protons.  A nucleus made of two of these super protons and six neutrons could not rightfully be called carbon, yet it would have the same mass and chemical properties, and the same electron shells.  Do you think it would be impossible to make a life form using these particles in place of carbon (assuming they behaved the same in all the right conditions), or is there something special about the identity of carbon?
 

> Could we not build an artificial retina which sent the right signals down the optic nerve and allow someone to see?

Sure, but it's still going to be a prosthetic antenna.

No, it is more than an antenna.  The retina does processing.  I chose the retina example as opposed to replacing part of the optic nerve precisely because the retina is more than an antenna.
 
You can
replicate the physical inputs from the outside world but you can't
necessarily replicate the psychic outputs from the visual cortex to
the conscious Self.

So the "psychic outputs" from the retina are reproducible, but not those of the visual cortex?  Why not?  The idea of these psychic outputs sounds somewhat like substance dualism or vitalism.
 
It's no more reasonable than expecting the
fingernails on an artificial hand to continue to grow and need
clipping. We don't have the foggiest idea how to create a new primary
color from scratch.

We have done this to monkeys already: http://www.guardian.co.uk/science/2009/sep/16/colour-blindness-monkeys-gene-therapy
The interesting thing is that the brain was apparently able to automatically adapt to the new signals received from the retina and process it for what it was, a new primary color input.  It only took the brain five months or so to rewire itself to process this new color.

"It was as if they woke up and saw these new colours. The treated animals unquestionably responded to colours that had been invisible to them," said Jay Neitz, a co-author on the study at the University of Washington in Seattle.

It is even thought that some small percentage of women see four primary colors: http://www.post-gazette.com/pg/06256/721190-114.stm
It is not that they have different genes for processing colors differently; they just have genes for a fourth type of light-sensitive cone, and their brain software adapts accordingly.  (Just as those with color blindness do not have defective brains.)
 
IMO, until we can do that - one of the most
objective and simple examples of subjective experience, we have no
hope of even beginning to synthesize consciousness from inorganic
materials.

I think it is wrong to say the subjective visual experience is simple.  It seems simple to us, but it has gone through massive amounts of processing and filters before you are made aware of it.  Some 30% of the gray matter in your brain is used to process visual data.

Given that, I would argue we have already implemented consciousness in inorganic materials.  Consider that Google's self-driving cars must discriminate between red and green street lights.  Is the self-driving car not aware of the color the street light is?
 

>And brains are just gelatinous tissue with cells squirting juices back and
>forth.  If you are going to use reductionism when talking about computers,
>then to be fair you must apply the same reasoning when talking about minds
>and brains.

Exactly. If we didn't know for a fact that our brain was hosting
consciousness through our first hand experience there would be
absolutely no way of suspecting that such a thing could exist.

I am not as certain of that as you are.  Imagine some alien probe came down to earth and observed apes pointing at a piece of red fruit up in a tree amongst many green leaves.  The probe might conclude that the ape was conscious of the fruit and has the awareness of different frequencies of light.  Then again, if you define consciousness out of the universe entirely, there would be no way we could suspect anything because there would be nothing we could do at all.
 
This is
what I'm saying about the private topology of the cosmos. We can't
access it directly because we are stuck in our own private topology.

So to apply this to computers and planes - yes they could have a
private topology, but judging from their lack of self-motivated
behaviors,

Check out the program "Smart Sweepers" on this page: http://www.ai-junkie.com/ann/evolved/nnt1.html
You will find software for evolving neural-network-based brains, which control the behavior of little robots searching a plane for food.  They start off completely dumb, most running around in circles, but after a few hundred generations they become quite competent, and after a few thousand I've even observed what could be described as social behavior (they all travel in the same direction and never turn around backwards if they miss a piece of food).  When I first saw this I was completely shocked; I would not have guessed this behavior would result even though I understood how the program worked.  There is, however, an individual survival benefit from following the group movement rather than traveling against the grain (the bots clear out food in a wave, and going against the grain you would get lucky far less often than traveling with it).  Note: press the "F" key to accelerate the evolution rather than animating it in its entirety when you get bored watching the performance of each generation.

The bots, and their evolution, are self-directed.  You will find no code in the source files indicating how to find food, or how, if they all travel in one pack, members of the group will individually benefit.  This is a surprising result which the computer found; it was not programmed in, nor was the computer told to do it.  The evolved bots' movement in their search for food can also be said to be self-directed, as much as the movement of a bacterium or insect in its search for food is.  You might go so far as to say each bot IS conscious of the closest piece of food (that information is fed into the neural network of each bot).  Whether or not a computer exhibits self-motivated behaviors is a matter of its programming; you couldn't say your word processor is very self-directed, for example.
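To make that concrete, here is a rough Python sketch of the same kind of neuro-evolution loop. It is not the ai-junkie source; the network size, episode lengths, mutation rate and fitness test below are invented for illustration. Nothing in it encodes how to reach food - the steering policy emerges from selection over random weight changes, which is the whole point of the demo.

import math
import random

N_IN, N_HID, N_OUT = 4, 6, 2    # inputs: heading + direction to food; outputs: two "track" speeds
GENOME_LEN = (N_IN + 1) * N_HID + (N_HID + 1) * N_OUT   # weights plus biases

def feed_forward(genome, inputs):
    """Run a tiny fixed-topology net whose weights are read off the genome."""
    i, hidden, outputs = 0, [], []
    for _ in range(N_HID):
        w = genome[i:i + N_IN + 1]; i += N_IN + 1
        hidden.append(math.tanh(sum(wi * x for wi, x in zip(w, inputs)) + w[-1]))
    for _ in range(N_OUT):
        w = genome[i:i + N_HID + 1]; i += N_HID + 1
        outputs.append(math.tanh(sum(wi * h for wi, h in zip(w, hidden)) + w[-1]))
    return outputs

def fitness(genome):
    """Count how many food items the bot reaches; same food layout for every genome."""
    world = random.Random(0)
    x = y = heading = 0.0
    eaten = 0
    for _ in range(30):                        # 30 food items placed by the 'world' RNG
        fx, fy = world.uniform(-10, 10), world.uniform(-10, 10)
        for _ in range(25):                    # a short episode chasing this item
            dx, dy = fx - x, fy - y
            dist = math.hypot(dx, dy) or 1.0
            left, right = feed_forward(genome,
                [math.cos(heading), math.sin(heading), dx / dist, dy / dist])
            heading += (left - right) * 0.3    # differential "tracks" turn the bot
            speed = (left + right) * 0.5 + 0.5
            x += math.cos(heading) * speed
            y += math.sin(heading) * speed
            if math.hypot(fx - x, fy - y) < 1.0:
                eaten += 1
                break
    return eaten

def evolve(pop_size=40, generations=60, mut_rate=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]          # truncation selection, with elitism
        children = list(parents)
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(GENOME_LEN)             # one-point crossover
            child = [g + random.gauss(0, 0.3) if random.random() < mut_rate else g
                     for g in a[:cut] + b[cut:]]           # Gaussian mutation
            children.append(child)
        pop = children
        if gen % 10 == 0:
            print("gen", gen, "best eats", fitness(pop[0]), "of 30 food items")
    return pop[0]

if __name__ == "__main__":
    evolve()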
 
it makes more sense to think of them in terms of purely
structural and electronic interiority rather than imagining that their
assembly into anthropological artifacts confers some kind of additional
subjectivity.

A living cell is more than the sum of its parts. A dead cell is made
of the same materials with the same organization as a living cell, it
just doesn't cohere as an integrated cell anymore, so lower level
processes overwhelm the whole.

I would say death, like unconsciousness, is the failure of higher-level processes.  It is the cessation of breathing (a high-level process) that causes mitochondria to stop producing ATP, which causes most other reactions in cells to cease.  Likewise, anesthetic chemicals make very little difference to the operation of brain cells at the lower levels, but globally, nerve signals won't travel as far, and different brain regions become isolated from each other.  Thus the brain still looks like it is alive and functioning, but there is no consciousness.  You cannot look at a conscious computer at the level of the silicon chip; by far most of the complexity is in the memory of the computer.  If we were to talk about a computer program with the same complexity as the human brain, it might have many petabytes (10^15 bytes) worth of memory.  The processor, or processors, serve only to provide a basic ruleset for relating the various structures that exist in memory, just as the laws of physics do for our biological brains.  Compared to our brains, the laws of physics look very simple, just as the architecture of any computer's CPU looks very simple compared to the in-memory representation of a brain.  This is the mistake Searle made when he said that as the rule-follower he wouldn't understand a thing: there is very little complexity in the following of the rules, and the computer is more than just the CPU; it is the tape also.
 
Decay is entropy for a body or a piece
of fruit, but a bonanza of biological negentropy for bacteria and
insects.

>Do you need another person to look at and interpret the firings of neurons
>in your brain in order for there to be meaning for your thoughts?  If not,
>why must be a user of the computer to impart meaning to its states?

I'm not saying that there is no meaning to the states of
semiconductors acting in concert within a microprocessor, I'm just
saying that it's likely to be orders of magnitude more primitive than
organic life.

Again, I think you may be confusing the processor for the computer.  The complexity of a life form is bounded by the amount of information it takes to represent it, which roughly corresponds to how many base pairs its DNA has (2 bits for each one), though there is a high degree of compressibility.  Human DNA can be compressed to about 25 MB, with about half the genes describing the human brain in particular.  Thus the initial human infant brain could be described in about 12.5 MB (100 million bits) of information.  Obviously, through interaction with the environment it becomes much more complex and requires more bits to describe, but my point here is that there are data sets and programs much larger than what is necessary to describe organic life.  I agree with you that almost any life is more complex than the processor; instruction sets are typically small.  The Java Virtual Machine, for example, has only 256 possible instructions.  It knows how to do only a finite set of 256 different things.  Yet the number of possible Java programs is infinite; a program can be implemented to do just about anything (despite the limited instruction set of its processor).
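To underline the processor-versus-memory point, here is a deliberately tiny interpreter - a sketch of my own with five invented opcodes, not the JVM. The rule-follower is trivial; everything interesting lives in the program held in memory.

def run(program, data):
    """Interpret a list of (opcode, arg) pairs over a dict of named registers."""
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "load":
            data["acc"] = arg
        elif op == "add":
            data["acc"] += data.get(arg, 0)
        elif op == "store":
            data[arg] = data["acc"]
        elif op == "jump_if":             # jump when the accumulator is non-zero
            if data["acc"] != 0:
                pc = arg
                continue
        elif op == "halt":
            break
        pc += 1
    return data

program = [("load", 3), ("store", "x"),   # x = 3
           ("load", 4), ("add", "x"),     # acc = 4 + x
           ("store", "sum"), ("halt", None)]
print(run(program, {"acc": 0})["sum"])    # -> 7

A different list of instructions fed to the same five-opcode rule-follower does something entirely different; the behavior is in the tape, not the CPU.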
 
To me, it's obvious that the interior experience of
neurons firing is the important, relevant phenomenon while the neuron
side is the generic back end.

Since computers are a reflection of our own cognitive abilities rather
than a self-organizing phenomenon, their important, relevant phenomena
are the signifying side which faces the user. The guts of the computer
are just means to an end. They don't know that they are computers, and
they never will.

Dan Dennett said this belief is only a prejudice against non-neuron minds.
 
Computation is not awareness. If it were, you could
invent a new primary color simply by having someone understand a
formula. It's a category error to conflate the two.

This is Searle's error again.  The person would have to BE the new formula to experience the new color, not have a meta-understanding of the formula.  Searle was not the mind he was processing the rules for; he was just the CPU, so of course he did not experience things as the mind itself.  The laws of physics determine what your brain does, yet we don't say the laws of physics know what it is like to be you; only you know what it is like to be you.

Jason
 

Craig Weinberg

unread,
Jul 10, 2011, 1:29:19 AM7/10/11
to Everything List
> How is it you are so sure that the organization is only part of it?

Because it makes sense to me that organization cannot create functions
which are not inherent potentials of whatever it is you are
organizing. It doesn't matter how many ping pong balls you have or how
you organize them, even if you put velcro or grease on them, you're
not going to ever get a machine that feels or thinks or tries to kill
you when you threaten its organization. Life or consciousness does
not follow logically from mechanical organizations of any kind. Those
qualities can only be perceived by a subjective participant.

> Sulfur is not functionally equivalent to carbon, it will behave differently
> and thus it is not the same organization.

That's why I'm saying that to assume inorganic matter will behave in a
way that is functionally equivalent to organic cells, let alone
neurological networks, is not supported by any evidence. I think it's
a fantasy. Just because we can make a puppet seem convincingly
anthropomorphic to us doesn't mean that it can feel something.

> Do you think it
> would be impossible to make a life form using these particles in place of
> carbon (assuming they behaved the same in all the right conditions) or is
> there something special about the identity of carbon?

There is only something special about the identity of carbon because
organic chemistry relies upon it to perform higher level biochemical
acrobatics. There's no logical reason why sentience should occur in
one molecular arrangement and not another if you were designing a
cosmos from scratch. You could make a universe that makes sense where
noble gases stack up like cells and write symphonies. Consciousness
makes no more sense in a strictly physical universe than would time
travel, teleportation, or omnipotence. Less actually. Those magical
kinds of categories are at least variations on physical themes,
whereas feeling and awareness are wholly unprecedented and impossible
under purely mathematical and physical definitions. There is simply no
place for subjectivity to take place.

> No, it is more than an antenna. The retina does processing. I chose the
> retina example as opposed to replacing part of the optic nerve precisely
> because the retina is more than an antenna.

A living retina is more than an antenna because it is composed of a
microbiological community of living cells. An electronic retina is a
prosthetic extension of the optic nerve that may or may not serve as a
functional equivalent to the person using it. Just as a prosthetic
limb may be the functional equivalent in whatever ways its designer
deems feasible, important, etc, it doesn't mean that it's the same
thing, even if we can't consciously tell the difference.

Who knows, it may turn out that someone with an artificial eye has
more emotional distance toward the images they see, or maybe they will
have enhanced acuity for certain categories of things and not others,
etc. It's still not like replacing someone's amygdala or something.

> So the "psychic outputs" from the retina are reproducible, but not those of
> the visual cortex? Why not? The idea of these psychic outputs sounds
> somewhat like substance dualism or vitalism.

With the retina (or the cochlea, skin receptors, olfactory bulb, etc)
you are dealing with specialized tissues which, IMO, have concentrated
and centralized the sensorimotor functions inherent in all animal
cells into an organ for the larger organism. As such, their i/o is
more isomorphic to the physical phenomena they are interfacing with.
As with all tissues in the nervous system, they play a dual role,
subjugating their own psychic output as single celled organisms and
animal tissues to some degree in order to facilitate a psychic i/o at
the organism level. A nervous system is like an organism within an
organism. So yes, the output of the retina that we make sense of can
be reproduced, but you're not fooling the rest of the nervous system
and body.

>The interesting thing is that the brain was apparently able to automatically
>adapt to the new signals received from the retina and process it for what it
>was, a new primary color input.

Making existing colors accessible to an individual monkey or person's
nervous system is completely different from inventing a new primary
color in the universe. Even tetrachromats do not perceive a new
primary color, they just perceive finer distinction between existing
hue combinations. Not that a new color couldn't be achieved
neurologically, maybe it could, but we have no idea how to conceive of
what that color could look like. We can't think of a replacement for
yellow. We don't know where yellow comes from, or what it's made of,
or what other possible spectrum could be created. It's literally
inconceivable, like a square circle, not a matter of technical skill,
but an understanding that color is a visual feeling that has no
mechanical logic which invokes it by necessity. It has its own logic
which is just as fundamental as the elements of the periodic table,
and not reducible to physical phenomena.

>I think it is wrong to say the subjective visual experience is simple. It
>seems simple to us, but it has gone through massive amounts of processing
>and filters before you are made aware of it.

If it seems simple to us, so simple that an infant can relate to them
even before they can grasp numbers or letters, that would have to be
explained. There is a lot of technology behind this conversation as
well, but it doesn't mean these words are a complex technology. From
my perspective, the view you are investing in is west-of-center, in
the sense that it compels us to privilege third person views of first
person phenomena, which I think is sentimental and unscientific. First
person phenomena are legitimate, causally efficacious manifestations
in the cosmos having properties and functions which cannot be
meaningfully defined in strictly physical, objective terms.

> Is the self-driving car
> not aware of the color the street light is?

No way. It's not aware of anything. The sensitivity of the ccd to
optical changes in the environment drives electronic changes in the
chips but that's as far as it goes. Nothing is felt or known, it's
just unconsciously reported through a sophisticated program.

>Then again, if you define consciousness out of the universe
>entirely, there would be no way we could suspect anything because there
>would be nothing we could do at all.

Right, but just for the sake of argument, let's say that there were some
other way of analyzing the universe without consciousness. What I'm
saying is that there would be no hint of any interior dimension such
as we experience in every waking moment. Even if the analysis could
detect the kinds of patterns and behaviors we are familiar with (which
it wouldn't), the idea of consciousness itself just would not follow
from observing a living brain any more than a brain coral or a dead
brain.

>You will find software for evolving neural network based brains...

Sure, we can definitely make artificial patterns which reflect
intelligence, which behave intelligently, but they still don't feel
anything or care about their own existence. They have no subjective
interiority, they are just automatic patterns. Part of what we are is
just like that. Our bodies are evolving genetic robots, but that's
only half of what we are. The other half is equally interesting but
not as reducible to quantified variables.

>You cannot look at a
>conscious computer at the level of the silicon chip, by far most of the
>complexity is in the memory of the computer

Except that what the computer physically is can only be a collection
of silicon chips. It has no physical coherence of its own. Without a
human interpreter, the entire contents of the memory are just
a-signifying groupings of magnetized cobalt alloy. There is no
independent sentience there. In the absence of electric current and a
conscious creature to interact with it, the computer is just an
unusual collection of minerals.

Sorry if I sound rude or anything, I'm not trying to be argumentative.
You're being very civil and knowledgeable, and I appreciate that. I'm
just naturally wordy and obnoxious on this subject. It's what I do
most of my blogging about (http://s33light.org).


Jason Resch

unread,
Jul 10, 2011, 3:07:14 AM7/10/11
to everyth...@googlegroups.com
On Sun, Jul 10, 2011 at 12:29 AM, Craig Weinberg <whats...@gmail.com> wrote:
> How is it you are so sure that the organization is only part of it?

Because it makes sense to me that organization cannot create functions
which are not inherent potentials of whatever it is you are
organizing. It doesn't matter how many ping pong balls you have or how
you organize them, even if you put velcro or grease on them, you're
not going to ever get a machine that feels or thinks or tries to kill
you when you threaten its organization. Life or consciousness does
not follow logically from mechanical organizations of any kind. Those
qualities can only be perceived by a subjective participant.

In theory it is possible to build a computer using some system of ping pong balls.
(Here is a cool example of what a component of such a computer might look like: http://www.youtube.com/watch?v=GcDshWmhF4A )

Now imagine we link 100,000,000,000 of these ping-pong-ball-based computers together, each one capable of processing signals received from other linked computers and sending out signals at a rate of 1000 per second.  Each of these computers is connected to around 10,000 other such computers.  It is beyond the ability of the human mind to fathom something of this complexity, but seeing something this complex (about as complex as the brain) makes it a little easier to accept by intuition that a mind based on ping pong balls is possible.  When we typically try to imagine a ping-pong-ball mind, we have trouble picturing more than a few hundred ping pong balls, but to approach a mind as complex as the brain you would need some 10^15 (a thousand thousand thousand thousand thousand) transactions per second.  I don't think we can say what is or isn't possible with a machine of this complexity; all machines we have built to date are primitive and simplistic by comparison.  The machines we deal with day to day don't usually do novel things, exhibit creativity, surprise us, etc., but I think a machine as complex as the human brain could do these things regularly.
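Multiplying out the figures quoted above gives a sense of the scale (the inputs come from this paragraph; only the arithmetic is new, and the 10^15 transactions-per-second figure is of the same order as the total connection count):

units           = 10**11    # linked ping-pong-ball computers (the 100,000,000,000 above)
links_per_unit  = 10**4     # connections per unit
signals_per_sec = 10**3     # signals each unit can send per second

connections = units * links_per_unit    # 10^15 links in the whole machine
emissions   = units * signals_per_sec   # 10^14 signal emissions per second

print(f"{connections:.0e} connections, {emissions:.0e} emissions per second")
# -> 1e+15 connections, 1e+14 emissions per second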
 

> Sulfur is not functionally equivalent to carbon, it will behave differently
> and thus it is not the same organization.

That's why I'm saying that to assume inorganic matter will behave in a
way that is functionally equivalent to organic cells, let alone
neurological networks, is not supported by any evidence. I think it's
a fantasy. Just because we can make a puppet seem convincingly
anthropomorphic to us doesn't mean that it can feel something.

If one day humans succeeded in reverse engineering a brain, and executed it on a supercomputer, and it told you it was conscious and alive, and did not want to be turned off, would this convince you or would you believe it was only mimicking something that could feel something?  If not, it seems there would be no possible evidence that could convince you.  Is that true?
 

> Do you think it
> would be impossible to make a life form using these particles in place of
> carbon (assuming they behaved the same in all the right conditions) or is
> there something special about the identity of carbon?

There is only something special about the identity of carbon because
organic chemistry relies upon it to perform higher level biochemical
acrobatics. There's no logical reason why sentience should occur in
one molecular arrangement and not another if you were designing a
cosmos from scratch.

I believe this is what computers allow us to do: explore alternate universes by defining new sets of logical rules. 
 
You could make a universe that makes sense where
noble gases stack up like cells and write symphonies. Consciousness
makes no more sense in a strictly physical universe than would time
travel, teleportation, or omnipotence. Less actually. Those magical
kinds of categories are at least variations on physical themes,
whereas feeling and awareness are wholly unprecedented and impossible
under purely mathematical and physical definitions. There is simply no
place for subjectivity to take place.

> No, it is more than an antenna.  The retina does processing.  I chose the
> retina example as opposed to replacing part of the optic nerve precisely
> because the retina is more than an antenna.

A living retina is more than an antenna because it is composed of a
microbiological community of living cells. An electronic retina is a
prosthetic extension of the optic nerve that may or may not serve as a
functional equivalent to the person using it. Just as a prosthetic
limb may be the functional equivalent in whatever ways its designer
deems feasible, important, etc, it doesn't mean that it's the same
thing, even if we can't consciously tell the difference.

Who knows, it may turn out that someone with an artificial eye has
more emotional distance toward the images they see, or maybe they will
have enhanced acuity for certain categories of things and not others,
etc. It's still not like replacing someone's amygdala or something.

Neural prostheses will be common some day; Thomas Berger has spent the past decade reverse engineering the hippocampus:
http://www.popsci.com/scitech/article/2007-04/memory-hacker

Down the hall, Berger rises to greet me in his office. An imposing man with a shock of gray hair, Berger, 56, has the thick build of an aging athlete and the no-nonsense manner of a CEO. Can a chunk of silicon really stand in for brain cells? I ask. "I don't need a grand theory of the mind to fix what is essentially a signal-processing problem," he says. "A repairman doesn't need to understand music to fix your broken CD player."

 
 

> So the "psychic outputs" from the retina are reproducible, but not those of
> the visual cortex?  Why not?  The idea of these psychic outputs sounds
> somewhat like substance dualism or vitalism.

With the retina (or the cochlea, skin receptors, olfactory bulb, etc)
you are dealing with specialized tissues which, IMO, have concentrated
and centralized the sensorimotor functions inherent in all animal
cells into an organ for the larger organism. As such, their i/o is
more isomorphic to the physical phenomena they are interfacing with.
As with all tissues in the nervous system, they play a dual role,
subjugating their own psychic output as single celled organisms and
animal tissues to some degree in order to facilitate a psychic i/o at
the organism level. A nervous system is like an organism within an
organism. So yes, the output of the retina that we make sense of can
be reproduced, but you're not fooling the rest of the nervous system
and body.

It seems to me a little too convenient that the same biological material which is needed for self-reproducing cells just happens to be the only viable substrate which can support consciousness.  If only one substrate is possible in any given universe, why do you think it just so happens to line up with the same materials which serve a biological function?  Do you subscribe to anthropic reasoning?

 

>The interesting thing is that the brain was apparently able to automatically
>adapt to the new signals received from the retina and process it for what it
>was, a new primary color input.

Making existing colors accessible to an individual monkey or person's
nervous system is completely different from inventing a new primary
color in the universe.

Primary colors aren't physical properties, they are purely mental constructions.  There are shrimp which can see something like 16 different primary colors.  It is a factor of the dimensionality of the inputs the brain has to work with when generating the environment you believe yourself to be in.
 
Even tetrachromats do not perceive a new
primary color, they just perceive finer distinction between existing
hue combinations.

The reason we say there are 3 primary colors, and that TV screens have pixels of three different colors, is that most humans have three different types of color-sensitive cones in their eyes.  Each cone type can distinguish between 100 or so intensity levels, so someone with red-green color blindness can only see about 100*100 different colors.  A typical human can see about a million (100*100*100), while a tetrachromat could distinguish between 100*100*100*100 colors.
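Written out as a quick calculation (the ~100 intensity levels per cone type is the rough figure used above, not a measurement):

levels_per_cone = 100    # rough number of distinguishable intensity levels per cone type

for cone_types, label in [(2, "dichromat (red-green color blind)"),
                          (3, "typical trichromat"),
                          (4, "tetrachromat")]:
    print(f"{label}: ~{levels_per_cone ** cone_types:,} distinguishable colors")

# dichromat: ~10,000   trichromat: ~1,000,000   tetrachromat: ~100,000,000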
 
Not that a new color couldn't be achieved
neurologically, maybe it could, but we have no idea how to conceive of
what that color could look like.

I agree we can't really conceive of these new colors without having a mind capable of representing them.  Perhaps there will be gene therapy in the near future which will allow any human to become a tetrachromat, and then like the monkeys you will one day wake up and see entirely novel colors. Hopefully people won't suddenly appear ugly to you after that switch occurs!
 
We can't think of a replacement for
yellow. We don't know where yellow comes from, or what it's made of,
or what other possible spectrum could be created. It's literally
inconceivable, like a square circle, not a matter of technical skill,
but an understanding that color is a visual feeling that has no
mechanical logic which invokes it by necessity. It has its own logic
which is just as fundamental as the elements of the periodic table,
and not reducible to physical phenomena.

For fun, see if you can have any success with this:
http://en.wikipedia.org/wiki/Opponent_process#Reddish_green_and_yellowish_blue
http://en.wikipedia.org/wiki/Impossible_colors
By putting red light into one eye and green light into another with the same level of brightness, some people have reported experiencing "impossible" colors, such as reddish green and yellowish blue.
 

>I think it is wrong to say the subjective visual experience is simple.  It
>seems simple to us, but it has gone through massive amounts of processing
>and filters before you are made aware of it.

If it seems simple to us, so simple that an infant can relate to them
even before they can grasp numbers or letters, that would have to be
explained. There is a lot of technology behind this conversation as
well, but it doesn't mean these words are a complex technology. From
my perspective, the view you are investing in is west-of-center, in
the sense that it compels us to privilege third person views of first
person phenomena, which I think is sentimental and unscientific. First
person phenomena are legitimate, causally efficacious manifestations
in the cosmos having properties and functions which cannot be
meaningfully defined in strictly physical, objective terms.

I think they are informational rather than physical, but I tend to agree they may not be communicable without instantiating the same patterns in your own mind, or rewiring one's own brain to have the experience of someone else.
 

> Is the self-driving car
> not aware of the color the street light is?

No way. It's not aware of anything.

How does it know to stop at a red light if it is not aware of anything?
 
The sensitivity of the ccd to
optical changes in the environment drives electronic changes in the
chips but that's as far as it goes. Nothing is felt or known, it's
just unconsciously reported through a sophisticated program.

Someone could say the same thing about you, could they not?  Without being the self-driving car, you really can't assert that it is aware of nothing.
 

The human brain doesn't have a tiny homunculus inside of it watching a projector screen of conscious thought; the brain itself is a system which provides its own meaning and interpretation.  Likewise, a word processor distinguishes incorrectly spelled words from correctly spelled words whether or not someone is looking at or using the word processor.  Surely the meaning of a misspelled word to a person is different from its meaning to the word processor, yet there is still a distinction, and the internal difference between the status of correctly spelled and incorrectly spelled affects the state of the word processor.
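As a minimal sketch of that last point - the toy dictionary and class below are invented purely for illustration - the internal distinction exists in the program's state whether or not anyone ever inspects it:

class WordProcessor:
    DICTIONARY = {"the", "cat", "sat"}        # toy lexicon, invented for the example

    def __init__(self, text):
        self.words = text.split()
        # Internal state: which word positions the program flags as misspelled.
        self.misspelled = {i for i, w in enumerate(self.words)
                           if w.lower() not in self.DICTIONARY}

doc = WordProcessor("the cat szat")
assert doc.misspelled == {2}   # the distinction is held in state with no user looking at it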
 
There is no
independent sentience there. In the absence of electric current and a
conscious creature to interact with it, the computer is just an
unusual collection of minerals.

You could say that about just about anything, a person, a city, the Earth, that they are all just unusual collections of minerals, but that misses a lot of the finer points.
 

Sorry if I sound rude or anything, I'm not trying to be argumentative.
You're being very civil and knowledgeable, and I appreciate that.

I don't think you are rude or argumentative.  I appreciate the opportunity for a debate. :-)

Jason
 


Bruno Marchal

unread,
Jul 9, 2011, 2:35:50 PM7/9/11
to everyth...@googlegroups.com

On 09 Jul 2011, at 18:58, Craig Weinberg wrote:

> Sure, it would be great to have improved synthetic bodies, but I have
> no reason to believe that depth and quality of consciousness is
> independent from substance. If I have an artificial heart, that
> artificiality may not affect me as much as having an artificial leg,
> however, an artificial brain means an artificial me, and that's a
> completely different story. It's like writing a computer program to
> replace computer users. You might find out that digital circuits are
> unconscious by definition.


You might find out that molecules in the brain are unconscious too.
What in the brain would not be Turing emulable? You need to speculate
on a new physics, or on the fact that a brain would be a very special
analogical infinite machine. Why not?
You might still appreciate my point. I don't think that anyone has yet
shown that comp leads to a contradiction, but comp leads to a
reappraisal of the relation between the first person and the third
person, or, at some other level, of consciousness and matter, and this
in a testable way.
But there is no problem with what you say. If you believe in
physicalism, then indeed mechanism is no longer an option.
In my opinion, mechanism is more plausible than physicalism, and also
more satisfactory in explaining where the "illusion" of matter comes
from. Actually I don't know of any other explanation.

Bruno



http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

unread,
Jul 10, 2011, 9:20:32 AM7/10/11
to Everything List
>You might find out that molecules in the brain are unconscious too.

The fact that consciousness changes predictably when different
molecules are introduced to the brain, and that we are able to produce
different molecules by changing the content of our consciousness
subjectively suggests to me that it makes sense to give molecules the
benefit of the doubt.

>What in the brain would not be Turing emulable

Let's take the color yellow for example. If you build a brain out of
ideal ping pong balls, or digital molecular emulations, does it
perceive yellow from 580nm oscillations of electromagnetism
automatically, or does it see yellow when its own emulated units are
vibrating on the functionally proportionate scale to itself? Does the
ping pong ball brain see its own patterns of collisions as yellow or
does yellow = electromagnetic ~580nm and nothing else. At what point
does the yellow come in? Where did it come from? Were there other
options? Can there ever be new colors? From where? What is the minimum
mechanical arrangement required to experience yellow?

>You need to speculate
> on a new physics,

Yes, I do speculate on a new physics. I think that what we can
possibly see outside of ourselves is half of what exists. What we
experience is only a small part of the other half. Physics wouldn't
change, but it would be seen as the exterior half of a universal
topology. I did a post this morning that might help: http://s33light.org/post/7453105138

I do appreciate your point, and I think there is great value in
studying cognitive mechanics and pursuing AGI regardless of its
premature assumption to lead to synthetic consciousness. I think that
physicalism and mechanism are both useful in their appropriate
contexts - the brain does have physical organization which determines
how consciousness develops, just as a cell phone or desktop determines
how the internet is presented. It's a bidirectional flow of influence.
We unknowingly affect the brain and the brain unknowingly affects us.
They are two intertwined but mutually ignorant topologies of the same
ontological coin.

Craig

Bruno Marchal

unread,
Jul 10, 2011, 11:32:36 AM7/10/11
to everyth...@googlegroups.com

On 10 Jul 2011, at 15:20, Craig Weinberg wrote:

>> You might find out that molecules in the brain are unconscious too.
>
> The fact that consciousness changes predictably when different
> molecules are introduced to the brain, and that we are able to produce
> different molecules by changing the content of our consciousness
> subjectively suggests to me that it makes sense to give molecules the
> benefit of the doubt.

All right, but then honesty should force you to do the same with
computer chips. Unless you presuppose that the molecules are not
Turing emulable.


>
>> What in the brain would not be Turing emulable
>
> Let's take the color yellow for example. If you build a brain out of
> ideal ping pong balls, or digital molecular emulations, does it
> perceive yellow from 580nm oscillations of electromagnetism
> automatically, or does it see yellow when its own emulated units are
> vibrating on the functionally proportionate scale to itself? Does the
> ping pong ball brain see its own patterns of collisions as yellow or
> does yellow = electromagnetic ~580nm and nothing else. At what point
> does the yellow come in? Where did it come from? Were there other
> options? Can there ever be new colors? From where? What is the minimum
> mechanical arrangement required to experience yellow?

Any mechanical arrangement defining a self-referentially correct
machine automatically leads the mechanical arrangement to distinguish
the third person point of view from first person points of view. The
machine already has a theory of qualia, with an explanation of why
qualia and quanta seem different.


>
>> You need to speculate
>> on a new physics,
>
> Yes, I do speculate on a new physics. I think that what we can
> possibly see outside of ourselves is half of what exists.

I agree. But this is a consequence of comp, and it leads to a
derivation of physics from computer science/machine's theology. No
need to introduce any physics (old or new).

> What we
> experience is only a small part of the other half. Physics wouldn't
> change, but it would be seen as the exterior half of a universal
> topology. I did a post this morning that might help: http://s33light.org/post/7453105138

That certainly *looks* like the arithmetical Plotinian physics.
Again, you can extract it (or have to extract it for getting the
correct quanta/qualia) from computer science (actually from just
addition and multiplication and a small amount of logic).

>
> I do appreciate your point, and I think there is great value in
> studying cognitive mechanics and pursuing AGI regardless of its
> premature assumption to lead to synthetic consciousness.

I don't really do that. I don't think that consciousness can be
created or be synthetic. It is not the product of any machine, natural
or artificial. Such machines only filter consciousness and select
relative partial realities. My main point is that this is testable. It
already explains non-locality, indeterminacy, the non-cloning of matter,
and some formal aspects of quantum mechanics.

> I think that
> physicalism and mechanism are both useful in their appropriate
> contexts -

Mechanism and physicalism are incompatible.

> the brain does have physical organization which determines
> how consciousness develops,

I do agree with this.


> just as a cell phone or desktop determines
> how the internet is presented. It's a bidirectional flow of influence.
> We unknowingly affect the brain and the brain unknowingly affects us.
> They are two intertwined but mutually ignorant topologies of the same
> ontological coin.

That is too vague. It can make sense in the computationalist theory,
yet the brain itself is a construct of the mind. Not the human mind
but the relative experience of the many universal numbers/
computational histories. This follows from the digital mechanist
hypothesis.

Bruno

meekerdb

unread,
Jul 10, 2011, 11:53:53 AM7/10/11
to everyth...@googlegroups.com
On 7/9/2011 9:44 AM, Craig Weinberg wrote:
>> Why? Biological tissue is made out of protons, neutrons, and electrons
>> > just like computer chips. Why should anything other than their
>> > input/output function matter?
>>
> A cadaver is made out of the same thing too. You could pump food into
> it and fit it with an artificial gut, even give it a synthesized voice
> to make pre-recorded announcements and string it up like a marionette.
> That doesn't mean it's a person. Life does not occur on the atomic
> level, it occurs on the molecular level.

Exactly. So it doesn't depend on the components. Then what does it
depend on? It depends on their arrangement and interaction. The
components at some low level, in this case atoms, are *not* alive. How
can cognition be any different?

> There may be a way of making
> inorganic molecules reproduce themselves, but there's no reason to
> believe that their sensation or cognition would be any more similar
> than petroleum is to plutonium. The i/o function is only half of the
> story.
>

So what's the other half? Do brains have to be made of special
conscious atoms?

>
>> > Just assertions. The question is whether something other than you can
>> > have them?
>>
> Why couldn't it? As you say, I am made of the same protons, neutrons,
> and electrons as everything else. You can't have it both ways. Either
> consciousness is a natural potential of all material phenomena or it's
> a unique special case. In the former you have to explain why more
> things aren't conscious, and the latter you have to explain why
> consciousness could exist. My alternative is to see that everything
> has a private side, which behaves in a sensorimotor way rather than
> electromagnetic, so that our experience is a massive sensorimotor
> aggregate of nested organic patterns.
>

What does "sensorimotor way" mean? It sounds like the cognitive
analog of élan vital.

Brent


meekerdb

unread,
Jul 10, 2011, 11:59:58 AM7/10/11
to everyth...@googlegroups.com
On 7/9/2011 9:58 AM, Craig Weinberg wrote:
> Sure, it would be great to have improved synthetic bodies, but I have
> no reason to believe that depth and quality of consciousness is
> independent from substance. If I have an artificial heart, that
> artificiality may not affect me as much as having an artificial leg,
> however, an artificial brain means an artificial me, and that's a
> completely different story. It's like writing a computer program to
> replace computer users. You might find out that digital circuits are
> unconscious by definition.
>

But analog ones are? It is generally thought that any analog circuit
can be reproduced at any given level of precision by a digital circuit.
Bruno's idea depends on this being true. It is questionable though
because it may be the case that spacetime is truly a continuum:
http://www.newscientist.com/article/mg21128204.200-distant-light-hints-at-size-of-spacetime-grains.html
It's hard to believe though that the continuous nature of spacetime
would affect the function of brains. However, it would prevent the
digital simulation of large regions.
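
As an aside, here is a minimal Python sketch of the "any given level of precision" claim (the function names and numbers are mine, purely for illustration): an "analog" waveform is sampled and quantized with a configurable bit depth, and the worst-case error shrinks as the bit depth grows. It only illustrates approximation of sampled values, not a full circuit emulation.

import math

def quantize(x, bits, lo=-1.0, hi=1.0):
    # Map a real value in [lo, hi] onto one of 2**bits discrete levels.
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return lo + round((x - lo) / step) * step

def max_error(signal, bits, samples):
    # Worst-case gap between the "analog" signal and its digital version.
    return max(abs(signal(t / samples) - quantize(signal(t / samples), bits))
               for t in range(samples))

analog = lambda t: math.sin(2 * math.pi * t)   # stand-in for an analog waveform
for bits in (4, 8, 16):
    print(bits, "bits ->", max_error(analog, bits, 1000))

Each extra bit roughly halves the worst-case error, which is the sense in which "any given level of precision" can be met.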

Brent

meekerdb

unread,
Jul 10, 2011, 3:05:11 PM7/10/11
to everyth...@googlegroups.com
On 7/9/2011 5:42 PM, Craig Weinberg wrote:
> A living cell is more than the sum of its parts. A dead cell is made
> of the same materials with the same organization as a living cell,

That's not true. It's dead precisely because it doesn't have the same
organization.

Brent

Craig Weinberg

unread,
Jul 10, 2011, 9:52:20 PM7/10/11
to Everything List
>I don't think we can say what is or what wouldn't be possible with a machine of this
>complexity; all machines we have built to date are primitive and simplistic
>by comparison. The machines we deal with day to day don't usually do novel
>things, exhibit creativity, surprise us, etc. but I think a machine as
>complex as the human brain could do these things regularly.

I do think that we can say, with the same certainty that we cannot
create a square circle, that it would not be possible at any level of
complexity. It's not that they can't create novelty or surprise, it's
that they can't feel or care about their own survival. I'm saying that
the potential for awareness must be built in to matter at the lowest
level or not at all. Complexity alone cannot cause awareness in
inanimate objects, let alone the kind of rich, idiopathic phenomena
we think of as qualia. The waking state of consciousness requires no
more biochemical complexity to initiate than does unconsciousness. In
this debate, the idea of complexity is a red herring which, together
with probability, acts as a veil for what I consider to be the religious
faith of promissory materialism.

> If one day humans succeeded in reverse engineering a brain, and executed it
> on a super computer, and it told you it was conscious and alive, and did not
> want to be turned off, would this convince you or would you believe it was
> only mimicking something that could feel something? If not, it seems
> there would be no possible evidence that could convince you. Is that true?

The only thing that would come close to convincing me that a
virtualized brain was successful in producing human consciousness
would be if a person could live with half of their brain emulated for
a while, then switch to the other half emulated for a while and report
as to whether their memories and experiences of being emulated were
faithful. I certainly would not exchange my own brain for a computer
program based on the computer program's assessment of its own
consciousness.

> I believe this is what computers allow us to do: explore alternate universes
> by defining new sets of logical rules.

Sure, but they can also blind us to the aspects of our own universe
which cannot ever be defined by any set of logical rules (such as the
experiential nature of qualia).

> Neural prostheses will be common some day, Theodore Berger has spent the past
> decade reverse engineering the hippocampus: http://www.popsci.com/scitech/article/2007-04/memory-hacker

Prostheses are great but you can't assume that you can replace the
parts of the brain which host the conscious self without replacing the
self. If you lose an arm or a leg, fine, but if you lose a head and a
body, you're out of luck. To save the arm and replace the head with a
cybernetic one is not the same thing. Even if you get a brain grown
from your own stem cells, it's not going to be you. One identical twin
is not a valid replacement for the other.

> If only one possible
> substrate is possible in any given universe, why do you think it just so
> happens to line up with the same materials which serve a biological
> function? Do you subscribe to anthropic reasoning?

I don't know that only one substrate is possible, and I don't
necessarily think that consciousness is unique to biology, I just
think that human consciousness in particular is an elaboration of
hominid perception, animal sense, and organic molecular detection. The
more you vary from that escalation, the more you should expect the
interiority to diverge from our own. It's not that we cannot build a
brain based on plastic and semiconductors, it's that we should not
assume that such a machine would be aware at all, just as a plastic
flower is not a plant. It looks enough like a plant to fool our casual
visual inspection, but for every other animal, plant, or insect, the
plastic flower is nothing like a plant at all. A plastic brain is the
same thing. It may make for a decent android to serve our needs, but
it's not going to be an actual person.

> Primary colors aren't physical properties, they are purely mental
> constructions. There are shrimp which can see something like 16 different
> primary colors. It is a factor of the dimensionality of the inputs the
> brain has to work with when generating the environment you believe yourself
> to be in.

They are phenomena present in the cosmos, just as a quark or galaxy
is. Labeling them mental constructions is just a way of disqualifying
them by appealing to metaphysical speculation. Mentally constructed
where? From what? How? Why can't we mentally construct new colors
ourselves? Even if you had seen red and blue, you could not in your
wildest imaginings or most rigorous quantitative expression conceive
of what it is to see yellow if you had never seen it. Yellow is not
just a bluer version of red, even though electromagnetically that is
exactly what it should be, it's different from either blue or red and
different in a self-explanatory, exquisitely signifying way. Shrimp
may not even see one color, let alone 16. They may just be able to
distinguish different qualities of grey. You're still not accepting
that color is not mechanical. It has no third party dimensionality. It
is either seen first hand or it does not exist. This is the way most
of the phenomena we experience and certainly the experiences we care
about work.

> some people have reported experiencing "impossible"
> colors, such as reddish green and yellowish blue.

Yeah I like that demo. It's not a new primary color though, that's
just contradictory mixing of familiar colors. I'm not talking about
reddish green, I'm talking about Xed, Yhite, and Zlue. Because if you
are going to assert that the spectrum is a mental construct then there
would need to be some explanation of how many such mental universals
can be constructed. Why not ten million completely and utterly novel
spectrums? How do you make them make sense internally so that you can
have complements and opposites, color wheels and additive vs
subtractive mixing palettes?

>I think they are informational, rather than physical, but I tend to agree it
>may not be communicable without instantiating the same patterns in your own
>mind, or rewiring one's own brain to have the experience of someone else.

I think that informational is metaphysical. It doesn't explain how the
effect is achieved. Imagine that color did not exist and you were
writing a program for a virtual world. How would you invent color? How
could you even conceive of the idea for it, it's like 'beef flavored
nineteen'. There's no information there, it's pure experiential sense.
Visual feeling. It is physical but it is the interior of physicality -
not electromagnetic, but the sensorimotive topology of the sense which
can be detected externally as electromagnetism. Color is how
electromagnetism feels to us, to our brains, nerves, retinas. This is
a really big deal to realize. It's a secret door to finding your own
existence in a world of materialistic reflections.

>How does it know to stop at a red light if it is not aware of anything?
It doesn't stop at a red light. The car stops at an electronic signal
that fires when the photosensor and its associated semiconductors
match certain quantitative thresholds which correspond to what we see
as a red light. It has no idea there is a car or a light. It knows
silicon, boron, germanium, and what electricity feels like.


>The human brain doesn't have a tiny homunculus inside of it watching a
>projector screen of conscious thought, the brain itself is a system which
>provides its own meaning and interpretation.
It's not a tiny homunculus, it's us. Meaning is not provided by the
brain any more than movies are provided by a DVD player.

>Likewise, a word processor
>distinguishes incorrectly spelled words from correctly spelled words whether
>or not someone is looking at or using the word processor. Surely, the
>meaning to a person of a misspelled word is different from the meaning to
>the word-processor, yet there is still a distinction, and there is internal
>meaning between the status of correctly spelled vs. incorrectly spelled
>which affects the state of the word processor.

The word processor is just semiconductors which are activated and
controlled in a pattern we deem meaningful. There is no distinction for
the computer between correct or incorrect spelling, other than
different logic gates being held open or closed. It's just self
referential machine language and has no sense of linguistic
significance whatsoever. Computation by itself can only simulate
intelligence, it can't know any meaning, just as a recipe can't be
served as a meal.

>You could say that about just about anything, a person, a city, the Earth,
>that they are all just unusual collections of minerals, but that misses a
>lot of the finer points.

Right, because the finer points cannot be reduced to physical
mechanics or calculation, they must be experienced first hand.

Craig

On Jul 10, 3:07 am, Jason Resch <jasonre...@gmail.com> wrote:
> For fun, see if you can have any success with this: http://en.wikipedia.org/wiki/Opponent_process#Reddish_green_and_yello... http://en.wikipedia.org/wiki/Impossible_colors

Craig Weinberg

unread,
Jul 10, 2011, 10:17:32 PM7/10/11
to Everything List
>All right, but then honesty should force you to do the same with
>computer chips. Unless you presuppose that the molecules are not Turing
>emulable.

Computer chips don't behave in the same way though. Your computer
can't become an ammoniaholic or commit suicide. The problem with
emulating molecules is that we are only emulating the side of the coin
we can see. The other side is blank and that's the side that
interiority and awareness are made of. We can add chips to our brain
though, or build a computer out of cells.

>Any mechanical arrangement defining a self-referentially correct
>machine automatically leads the mechanical arrangement to distinguish
>the third person point of view from the first person points of view. The
>machine already has a theory of qualia, with an explanation of why
>qualia and quanta seem different.

If you are saying that the machine may already have its own qualia,
then sure, I agree, I just don't think it will be our qualia. I think
that our experience of yellow, for example, probably comes through
cellular experiences with photosynthesis and probably has not evolved
much since the Pre-Cambrian. Of course that's a guess. It could be a
mammalian thing or a hominid thing that arises out of the experience
of elaborations throughout the cortex. In order for a silicon chip to
generate that experience of yellow, I think it would have to learn to
speak chlorophyll and hemoglobin.

>I agree. But this is a consequence of comp, and it leads to a
>derivation of physics from computer science/machine's theology. No
>need to introduce any physics (old or new).

It could be that, but the transparency of comp to physical realities
and semantic consistencies is pretty convincing to me. I would rather
think that I am feeling what my fingers are feeling than imagining
that feeling is just a mathematical illusion. Mathematics seems
abstract and yellow seems concrete.

>That certainly *looks* like the arithmetical plotinian physics.
>Again, you can extract it (or have to extract it for getting the
>correct quanta/qualia) from computer science (actually from just
>addition and multiplication and a small amount of logic).

>I don't really do that. I don't think that consciousness can be
>created or be synthetic. It is not the product of any machine, natural
>or artificial. Such machines only filter consciousness and select
>relative partial realities. My main point is that this is testable. It
>already explains non locality, indeterminacy, non-cloning of matter,
>and some formal aspect of quantum mechanics.

Sorry, not sure what you mean. Probably over my head. What is it that
explains non-cloning of matter? comp? Give me some details and I'll
try to understand.

>That is too vague. It can make sense in the computationalist theory.
>yet the brain itself is a construct of the mind. Not the human mind
>but the relative experience of the many universal numbers/
>computational histories. This follows from the digital mechanist
>hypothesis.

Again, I'm not familiar enough with the theories. It sounds like
you're saying that the brain is made of numbers. Maybe? Not sure it
makes a difference?

Craig

Craig Weinberg

unread,
Jul 10, 2011, 10:44:51 PM7/10/11
to Everything List
>Exactly. So it doesn't depend on the components. Then what does it
>depend on? It depends on their arrangement and interaction.

Yes, but my point is that arrangement and interaction alone don't
matter if the components don't have the capability to support the
desired higher level phenomena.

If you had the blueprint of a watermelon seed and recreated it
precisely out of light bulbs instead of atoms, you could make a
gigantic sculpture of a watermelon seed, but nothing is going to
happen if you plant it in the ground and water it. You could make a
computer program to grow such a blueprint seed into a watermelon, but
it's never going to taste like anything to anyone. It's just a digital
sculpture.

>So what's the other half? Do brains have to be made of special
>conscious atoms?

The other half is the aggregate sensorimotive experience of all matter
over all time. The consciousness of a brain doesn't derive from
special atoms, it's that we are the sensorimotive experience of a
human brain, so the consciousness of human like phenomena seems
special to us, and in our view of the universe, it is special to us.

>What does "sensorimotor way" mean? It sounds like the cognitive
>analog of élan vital.
It extends beyond cognitive. Sensorimotor is just experiential input
(detection, sensation, perception, etc) and output (determinism,
instinct, volition). The three terms in each case are in ascending
order, so that an atom might experience detection and deterministic
force compelling reaction and those two functions may be simultaneous,
whereas the larger aggregates of cells and organs share a collective
experience which is perceptually rich and which spreads out the gap
between sense and motive, or slows it down so that a feeling of choice
can develop.

>But analog ones are?
No, I'm saying that it's not the circuits which are making the brain
conscious, it's the brain itself which is conscious, and the
circulation of electromagnetic correspondences within the tissue of
the brain is just the shadow of that. You can't build a brain by
superimposing those shadows onto a digital semiconductor array and
expect it to feel like a brain feels.

Craig

Jason Resch

unread,
Jul 10, 2011, 11:48:33 PM7/10/11
to everyth...@googlegroups.com
On Sun, Jul 10, 2011 at 8:52 PM, Craig Weinberg <whats...@gmail.com> wrote:
>I don't think we can say what is or what wouldn't be possible with a machine of this
>complexity; all machines we have built to date are primitive and simplistic
>by comparison.  The machines we deal with day to day don't usually do novel
>things, exhibit creativity, surprise us, etc. but I think a machine as
>complex as the human brain could do these things regularly.

I do think that we can say, with the same certainty that we cannot
create a square circle, that it would not be possible at any level of
complexity. It's not that they can't create novelty or surprise, it's
that they can't feel or care about their own survival. I'm saying that
the potential for awareness must be built in to matter at the lowest
level or not at all.

I disagree with this.  Do you have an argument to help convince me to change my opinion?
 
Complexity alone cannot cause awareness in
inanimate objects, let alone the kind of rich, idiopathic phenomena
we think of as qualia. The waking state of consciousness requires no
more biochemical complexity to initiate than does unconsciousness.

The complexity of the wiring doesn't change between an unconscious and a conscious brain, but the complexity of what is transmitted over that wiring does.  It is like a computer that is turned off, vs. one which has loaded its programs into memory and begun executing them.  There is no change in the wiring (hardware) of the computer, only a software change has occurred.  Similarly, the presence or absence of large-scale firing patterns involving many brain regions makes the difference between consciousness and unconsciousness.  fMRI scans have shown that a stimulus in an anesthetized brain does not travel nearly as far as it would in a conscious brain.  There is a difference in complexity between a signal that reaches 10 billion neurons and one that reaches 1 billion.

 
In
this debate, the idea of complexity is a red herring which, together
with probability, acts as a veil for what I consider to be the religious
faith of promissory materialism.

> If one day humans succeeded in reverse engineering a brain, and executed it
> on a super computer, and it told you it was conscious and alive, and did not
> want to be turned off, would this convince you or would you believe it was
> only mimicking something that could feel something?  If not, it seems
> there would be no possible evidence that could convince you.  Is that true?

The only thing that would come close to convincing me that a
virtualized brain was successful in producing human consciousness
would be if a person could live with half of their brain emulated for
a while, then switch to the other half emulated for a while and report
as to whether their memories and experiences of being emulated were
faithful. I certainly would not exchange my own brain for a computer
program based on the computer program's assessment of its own
consciousness.

Okay.
 

> I believe this is what computers allow us to do: explore alternate universes
> by defining new sets of logical rules.

Sure, but they can also blind us to the aspects of our own universe
which cannot ever be defined by any set of logical rules (such as the
experiential nature of qualia).

> Neural prostheses will be common some day, Theodore Berger has spent the past
> decade reverse engineering the hippocampus: http://www.popsci.com/scitech/article/2007-04/memory-hacker

Prostheses are great but you can't assume that you can replace the
parts of the brain which host the conscious self without replacing the
self.

Let's say there was advanced medical technology in the distant future which could heal a person from any wound; it could reassemble a person atom by atom or cell by cell if necessary.  If you were completely obliterated in some disaster and then perfectly restored with this machine, would it concern you if you learned you had been reconstructed from the medical device's own internal store of matter, rather than from your original atoms?  If it does, then you must somehow justify why the continual replacement of matter in your body through normal metabolism does not alarm you; if it does not, then some part of you admits that what constitutes a person is not their matter but the patterns of the matter which define them.  The mechanist idea is that it is patterns above all that matter, and whether they are replicated with brain cells, silicon chips, or ping pong balls, the essence of the entity (its personality, thought patterns, memories, abilities, dreams, etc.) would all be preserved.

This philosophy has already shown great success for anything that stores, transmits or processes information.  Data can be stored as magnetic poles on hard drives and tape, different levels of reflectivity on CDs and DVDs, as charges of electrons in flash memory, etc.  Data can be sent as vibrations in the air, electric fields in wires, photons in glass fibers, or ions between nerve cells.  Data can be processed by electromechanical machines, vacuum tubes, transistors, or biological neural networks.  These different technologies can be meshed together without causing any problem.  You can have packets sent over a copper wire in an Ethernet cable, and then be bridged to a fiber optic connection and represented as groups of photons, and then translated again to vibrations in the air, and then, after being received by a cochlea, transmitted as releases of ions between nerve cells.  Data can be copied from the flash memory in a digital camera, to a hard drive in a computer, and then encoded into a person's brain by way of a monitor.

To believe in the impossibility of an artificial brain is to believe there is some form of information which can only be transmitted by neurons, or some computation performed by neurons which cannot be reproduced by any other substrate.  Henry Markram and his team have thus far not found the need to incorporate any unknown physics in order to build biologically accurate models of large sections of interconnected neurons.  If you think his team will ultimately fail, you should go beyond the mere prediction that they will fail and provide a reasoning or explanation for what you think the roadblock will be.  Is it infinite complexity, non-computable functions, something else?  Without pointing to something in the brain which cannot be modeled, logic leads directly to the idea that intelligent machines are possible.  Once at this stage, you may say the machines may be intelligent, but unconscious.  This leads to a belief in philosophical zombies.  Is this what you believe will happen, or am I missing some part of your theory?
 
If you lose an arm or a leg, fine, but if you lose a head and a
body, you're out of luck. To save the arm and replace the head with a
cybernetic one is not the same thing. Even if you get a brain grown
from your own stem cells, it's not going to be you. One identical twin
is not a valid replacement for the other.


I agree, a re-grown brain or a twin's brain would not have the same memories.  However, nothing prevents the construction of a more faithful replica, complete with the original neuron links and connection weights.
 
>  If only one possible
> substrate is possible in any given universe, why do you think it just so
> happens to line up with the same materials which serve a biological
> function?  Do you subscribe to anthropic reasoning?

I don't know that only one substrate is possible, and I don't
necessarily think that consciousness is unique to biology, I just
think that human consciousness in particular is an elaboration of
hominid perception, animal sense, and organic molecular detection.

Qualia aren't directly connected to sensory measurements from the environment though.  If I swapped all the red-preferring cones in your eyes with the blue-preferring cones, then shone blue-colored light at your eyes, you would report it as red.
 
The
more you vary from that escalation, the more you should expect the
interiority to diverge from our own. It's not that we cannot build a
brain based on plastic and semiconductors, it's that we should not
assume that such a machine would be aware at all, just as a plastic
flower is not a plant. It looks enough like a plant to fool our casual
visual inspection, but for every other animal, plant, or insect, the
plastic flower is nothing like a plant at all. A plastic brain is the
same thing. It may make for a decent android to serve our needs, but
it's not going to be an actual person.

Do you think something can in principle act just like an intelligent person in every respect, without being at all conscious?
 

> Primary colors aren't physical properties, they are purely mental
> constructions.  There are shrimp which can see something like 16 different
> primary colors.  It is a factor of the dimensionality of the inputs the
> brain has to work with when generating the environment you believe yourself
> to be in.

They are phenomena present in the cosmos, just as a quark or galaxy
is. Labeling them mental constructions is just a way of disqualifying
them by appealing to metaphysical speculation.

This has been understood ever since the ancient Greeks, and by most scientists who have studied light:
The Greek philosopher Democritus, who lived in the 5th and 4th centuries B.C. and is best known for his atomic theory of matter, said "By convention there are sweet and bitter, hot and cold, by convention there is color; but in truth there are atoms and the void."  Galileo wrote in The Assayer, published in 1623, "I think that tastes, odors, colors, and so on are no more than mere names so far as the object in which we locate them are concerned, and that they reside in consciousness.  Hence if the living creature were removed, all these qualities would be wiped away and annihilated."  Newton, in Opticks: "For the rays, to speak properly, are not colored.  In them there is nothing else than a certain power and disposition to stir up a sensation of this or that color."

There is a simple proof of this.  The color magenta has no physical existence; it does not correspond to any frequency of light.  It is impossible to build a laser beam (single frequency) which is magenta colored.  Magenta exists in our minds only because our brain artificially connects the two ends of the spectrum into a closed loop, and invents a color between blue and red light.  The edges of the spectrum are, however, just the borders of the range of frequencies we humans can perceive; there is nothing physically special about the range of wavelengths from 380 to 710 nm.

I am not attempting to disqualify qualia, rather I am saying they are formed by the mind, and do not exist as physical properties of this universe.

 
Mentally constructed
where? From what? How?

I believe from relations that are defined by computations.
 
Why can't we mentally construct new colors
ourselves?

We have little control over the number of cone cells we are born with.  (But this may change soon, using gene therapy).  If we had full control to rewire our brain in any way we wanted, we could perceive entirely novel, never before seen colors.
 
Even if you had seen red and blue, you could not in your
wildest imaginings or most rigorous quantitative expression conceive
of what it is to see yellow if you had never seen it. Yellow is not
just a bluer version of red, even though electromagnetically that is
exactly what it should be, it's different from either blue or red and
different in a self-explanatory, exquisitely signifying way.

Someone who had no green-sensitive cones in their eyes might report yellow as a bluish red.  Much of this is defined by the opponent process, which explains why we can see bluish greens, greenish yellows, reddish blues, and yellowish reds, but we cannot see greenish reds or bluish yellows.  The reason is that red and green share one opponent channel, blue and yellow share another, and white and black a third.  A higher amount of green light compared to red causes the neurons in the red-green channel to fire more slowly; when red light is equal to green light, the channel fires at a base rate (which it would also do in the absence of any green or red light).  When red light is detected more strongly than green, the neurons fire more quickly.  This explains why we cannot perceive greenish-red objects, and it explains why we see a new color (yellow) when both are detected: the blue-yellow channel will begin to tip towards yellow, while the green-red channel remains balanced.
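
As a toy illustration of the opponent-process description above (my own simplification in Python, not a physiological model, and the channel names are mine): cone-like inputs are folded into a red-green, a blue-yellow, and a light-dark channel, and "yellow" shows up as the blue-yellow channel tipping while red-green stays at its balance point.

def opponent_channels(r, g, b):
    # Toy opponent coding: positive red_green means "more red than green",
    # positive blue_yellow means "more yellow than blue".
    return {
        "red_green": r - g,
        "blue_yellow": (r + g) / 2.0 - b,
        "light_dark": (r + g + b) / 3.0,
    }

print(opponent_channels(1.0, 1.0, 0.0))  # red = green, no blue: red_green stays 0, blue_yellow tips positive (yellow)
print(opponent_channels(1.0, 0.0, 0.0))  # pure red: red_green goes positive
print(opponent_channels(0.5, 0.5, 0.5))  # balanced input: both opponent channels at 0 (grey)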

 
Shrimp
may not even see one color, let alone 16. They may just be able to
distinguish different qualities of grey.

I accept that it could, but it would require the shrimp brain to throw away most of the information its eyes are collecting.  It is hard to imagine shrimp would evolve such specialized cone cells and then not take every possible advantage of the data they provide to the brain, as using it would likely provide a significant survival advantage.  One possibility is that the brain does not do any combining of the colors, and just uses the raw data directly.  This would enable the shrimp to see just 16 colors.  We would have to understand what type of processing is done within the shrimp brain to determine how the data are used.
 
You're still not accepting
that color is not mechanical. It has no third party dimensionality. It
is either seen first hand or it does not exist. This is the way most
of the phenomena we experience and certainly the experiences we care
about work.

> some people have reported experiencing "impossible"
> colors, such as reddish green and yellowish blue.

Yeah I like that demo. It's not a new primary color though, that's
just contradictory mixing of familiar colors. I'm not talking about
reddish green, I'm talking about Xed, Yhite, and Zlue. Because if you
are going to assert that the spectrum is a mental construct then there
would need to be some explanation of how many such mental universals
can be constructed.

Nearly an infinite number could be constructed, and they are all accessible within this universe.  (If you accept computationalism).
 
Why not ten million completely and utterly novel
spectrums? How do you make them make sense internally so that you can
have complements and opposites, color wheels and additive vs
subtractive mixing palettes?

It is all up to how the mind relates, compares, and contrasts the data it is presented with.  Remember, the only difference between red and blue in the optic nerve is which neurons are firing.  It is not as if different chemical signaling is used by nerves signaling blue light than by nerves signaling red light.  It is all just colorless dots and dashes traveling down the optic nerve.  There is no "essence of blue" that travels from the image of the sky, through your retina, down the optic nerve, and into your conscious experience.  The brain interprets the dots and dashes, and creates the experience of blue.  How?  It is certainly very complex, and you would do better to start with how the brain creates high or low pitched sounds, or even better, how it creates the sensation of touch.  These are much simpler senses, and fewer steps are made between the reception of the nerve impulse and one's conscious perception of it.
 

>I think they are informational, rather than physical, but I tend to agree it
>may not be communicable without instantiating the same patterns in your own
>mind, or rewiring one's own brain to have the experience of someone else.

I think that informational is metaphysical. It doesn't explain how the
effect is achieved. Imagine that color did not exist and you were
writing a program for a virtual world. How would you invent color?

Game programmers typically do this using 3 numbers which indicate the intensity of red, green, and blue, known as "RGB".  For any sense, all that is needed are measurable differences.  How they get turned into different qualia is up to whoever programs the mind.  As the programmer for a virtual world, I just need to provide a way to expose these differences in data to those entities within the environment.
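
A minimal sketch of that representation, in Python (illustrative only; the names and values are mine):

# A color as three 8-bit intensities, as in typical game/graphics code.
yellow = (255, 255, 0)      # full red + full green, no blue
magenta = (255, 0, 255)     # the "non-spectral" mix of red and blue mentioned earlier

def blend(c1, c2):
    # Average two RGB triples channel by channel.
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

print(blend(yellow, magenta))  # (255, 127, 127)

Inside the program there is nothing but these triples of numbers and operations on them; whatever qualia accompany them are not part of the data structure.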
 
How
could you even conceive of the idea for it, it's like 'beef flavored
nineteen'. There's no information there, it's pure experiential sense.
Visual feeling. It is physical but it is the interior of physicality -
not electromagnetic, but the sensorimotive topology of the sense which
can be detected externally as electromagnetism. Color is how
electromagnetism feels to us, to our brains, nerves, retinas.

Color is how nerve impulses from the optic nerve feel to us.  The brain doesn't have access to light (it exists in a pitch dark environment, imprisoned in an opaque skull).  Sound is how nerve impulses from the cochlea feel to us.  How do nerve impulses physically or biochemically differ between the auditory, optic, or olfactory nerves?  There is probably very little difference at all.  The difference comes in how the brain processes these different inputs in different ways.
 
This is
a really big deal to realize. It's a secret door to finding your own
existence in a world of materialistic reflections.

>How does it know to stop at a red light if it is not aware of anything?
It doesn't stop at a red light. The car stops at an electronic signal
that fires when the photosensor and its associated semiconductors
match certain quantitative thresholds which correspond to what we see
as a red light.

Sounds very much like a description  one could make for why a person stops at a red light.  There are inputs, some processing, and some outputs.  The difference is you think the processing done by a computer is meaningless, while the processing done by a brain is not.
 
It has no idea there is a car or a light. It knows
silicon, boron, germanium, and what electricity feels like.


>The human brain doesn't have a tiny homunculus inside of it watching a
>projector screen of conscious thought, the brain itself is a system which
>provides its own meaning and interpretation.
It's not a tiny homunculus, it's us. Meaning is not provided by the
brain any more than movies are provided by a DVD player.

>Likewise, a word processor
>distinguishes incorrectly spelled words from correctly spelled words whether
>or not someone is looking at or using the word processor.  Surely, the
>meaning to a person of a misspelled word is different from the meaning to
>the word-processor, yet there is still a distinction, and there is internal
>meaning between the status of correctly spelled vs. incorrectly spelled
>which affects the state of the word processor.

The word processor is just semiconductors which are activated and
controlled in a pattern we deem meaningful. There is no distinction for
the computer between correct or incorrect spelling, other than
different logic gates being held open or closed.

If that is so, then point out where this logic fails: "There is no distinction for a human that is sad or happy, there are just different collections of neurons either firing or not firing."
 
It's just self
referential machine language and has no sense of linguistic
significance whatsoever. Computation by itself can only simulate
intelligence, it can't know any meaning, just as a recipe can't be
served as a meal.

Okay.  So you agree that intelligence can be simulated, only without meaning.  What would you say if I told you I am a computer emulating the mind of Jason, and your entire conversation with him thus far has been with a computer?  (This is possible given that you said intelligence can be emulated.)  Would you still assert that this computer you have been conversing with does not possess any meaning?  Do I, the computer, understand neither the words you write nor the responses I craft?  Am I only a lowly adding machine, processing meaningless symbols in the way my programming tells me to process them?  I might as well call you, with your biological brain, a lowly machine, blindly following the laws of physics.  Your atoms don't understand a thing; as they bounce around off each other in predictable ways, they are just following the rules, after all.
 

Bruno Marchal

unread,
Jul 11, 2011, 4:08:30 AM7/11/11
to everyth...@googlegroups.com

On 10 Jul 2011, at 17:59, meekerdb wrote:

> On 7/9/2011 9:58 AM, Craig Weinberg wrote:
>> Sure, it would be great to have improved synthetic bodies, but I have
>> no reason to believe that depth and quality of consciousness is
>> independent from substance. If I have an artificial heart, that
>> artificiality may not affect me as much as having an artificial leg,
>> however, an artificial brain means an artificial me, and that's a
>> completely different story. It's like writing a computer program to
>> replace computer users. You might find out that digital circuits are
>> unconscious by definition.
>>
>
> But analog ones are? It is generally thought that any analog
> circuit can be reproduced at any given level of precision by a
> digital circuit.

You can build analog circuits which are not Turing emulable, but it
depends on your theory of computation on the reals, which lacks an
equivalent of the Church thesis, so there is no unanimity about what
this is, or whether it exists in nature. I am agnostic.

> Bruno's idea depends on this being true.

Which idea? I just show that comp makes physics necessarily a branch
of math, and precisely a branch of universal machine theology. I am
not saying that comp is true or false. That is the job of philosophers.


> It is questionable though because it may be the case that spacetime
> is truly a continuum: http://www.newscientist.com/article/mg21128204.200-distant-light-hints-at-size-of-spacetime-grains.html
> It's hard to believe though that the continuous nature of spacetime
> would effect the function of brains. However, it would prevent the
> digital simulation of large regions.

Comp explains that physics is not Turing emulable. Indeed, today,
physics seems still too much Turing emulable compared to what we can
extract intuitively from comp. But comp is not refuted by that fact,
because the real extraction of physics must obey the self-
referential constraints, which shows that the question is highly
non-trivial.

Bruno


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jul 11, 2011, 4:26:49 AM7/11/11
to everyth...@googlegroups.com
On 11 Jul 2011, at 04:17, Craig Weinberg wrote:

All right, but then honesty should force you to do the same with
computer chips. Unless you presuppose that the molecules are not Turing
emulable.

Computer chips don't behave in the same way though.

That is just a question of choice of level of description. Unless you believe in substantial infinite souls.



Your computer
can't become an ammoniaholic or commit suicide.

Why?



The problem with
emulating molecules is that we are only emulating the side of the coin
we can see.

That is true. 



The other side is blank and that's the side that
interiority and awareness is made of. We can add chips to our brain
though, or build a computer out of cells.

The other side is well explained in the comp theory. 





Any mechanical arrangement defining a self-referentially correct
machine automatically leads the mechanical arrangement to distinguish
the third person point of view from the first person points of view. The
machine already has a theory of qualia, with an explanation of why
qualia and quanta seem different.

If you are saying that the machine may already have its own qualia,
then sure, I agree, I just don't think it will be our qualia. I think
that our experience of yellow, for example, probably comes through
cellular experiences with photosynthesis and probably has not evolved
much since the Pre-Cambrian. Of course that's a guess. It could be a
mammalian thing or a hominid thing that arises out of the experience
of elaborations throughout the cortex. In order for a silicon chip to
generate that experience of yellow, I think it would have to learn to
speak chlorophyll and hemoglobin.

No problem. That would mean that the substitution level is low. It does not change the conclusion: the physical world is a projection of the mind, and the mind is an inside view of arithmetic (or comp is false, that is, false at all levels, and you need substantial souls). But we don't even find a substance for explaining matter, so that seems a regression to me. Anyway, it is inconsistent with the comp assumption.





I agree. But this is a consequence of comp, and it leads to a
derivation of physics from computer science/machine's theology. No
need to introduce any physics (old or new).

It could be that, but the transparency of comp to physical realities
and semantic consistencies is pretty convincing to me.

It is not.



I would rather
think that I am feeling what my fingers are feeling than imagining
that feeling is just a mathematical illusion. Mathematics seems
abstract and yellow seems concrete.

But computer science explains why and how such feelings occur.




That certainly *looks* like the arithmetical plotinian physics.
Again, you can extract it (or have to extract it for getting the
correct quanta/qualia) from computer science (actually from just
addition and multiplication and a small amount of logic).

I don't really do that. I don't think that consciousness can be
created or be synthetic. It is not the product of any machine, natural
or artificial. Such machines only filter consciousness and select
relative partial realities. My main point is that this is testable. It
already explains non locality, indeterminacy, non-cloning of matter,
and some formal aspect of quantum mechanics.

Sorry, not sure what you mean. Probably over my head. What is it that
explains non-cloning of matter? comp? Give me some details and I'll
try to understand.


If you get the first six or seven steps, it is an easy exercise to show that matter cannot be cloned. Ask if you have any difficulty.



That is too vague. It can make sense in the computationalist theory.
yet the brain itself is a construct of the mind. Not the human mind
but the relative experience of the many universal numbers/
computational histories. This follows from the digital mechanist
hypothesis.

Again, I'm not familiar enough with the theories. It sounds like
you're saying that the brain is made of numbers. Maybe? Not sure it
makes a difference?

The brain is not made of numbers.

The belief in brains (and atoms) entirely results from infinities of number relations.

Or comp is false.

My point is just that computer science makes this precise enough that comp can be tested.

Bruno

Craig Weinberg

unread,
Jul 11, 2011, 8:04:23 AM7/11/11
to Everything List
> That's not true. It's dead precisely because it doesn't have the same
> organization.

No, it's dead because the organization means something specific to the
molecular participants below and the biological community above. If it
were just a matter of organization, then there should be no particular
problem with reviving dead organisms, and we would make no more
distinction between our own life and death and the cold and warm
temperature of an inanimate object. Organization does not explain
subjective entanglement. Desire, terror, rage, hysterical laughter,
etc. Organization, by itself, has no significance.

Evgenii Rudnyi

unread,
Jul 11, 2011, 8:33:04 AM7/11/11
to everyth...@googlegroups.com
On 10.07.2011 17:32 Bruno Marchal said the following:

>
> On 10 Jul 2011, at 15:20, Craig Weinberg wrote:

...

>>
>> Let's take the color yellow for example. If you build a brain out
>> of ideal ping pong balls, or digital molecular emulations, does it
>> perceive yellow from 580nm oscillations of electromagnetism
>> automatically, or does it see yellow when it's own emulated units
>> are vibrating on the functionally proportionate scale to itself?
>> Does the ping pong ball brain see it's own patterns of collisions
>> as yellow or does yellow = electromagnetic ~580nm and nothing else.
>> At what point does the yellow come in? Where did it come from? Were
>> there other options? Can there ever be new colors? From where? What
>> is the minimum mechanical arrangement required to experience
>> yellow?
>
> Any mechanical arrangement defining a self-referentially correct
> machine automatically leads the mechanical arrangement to distinguish
> the third person point of view from the first person points of view. The
> machine already has a theory of qualia, with an explanation of why
> qualia and quanta seem different.
>

Bruno,

Could you please point me to a good text for dummies about that
statement? (But please not in French.)

Best wishes,

Evgenii
http://blog.rudnyi.ru

Bruno Marchal

unread,
Jul 11, 2011, 10:01:40 AM7/11/11
to everyth...@googlegroups.com
I am afraid the only text which explains this in a simple way is my sane04 paper(*). It is in the second part (the interview of the machine), and it uses Smullyan's popular explanation of the logic of self-reference (G) from his popular book "Forever Undecided".

Popular attempts to explain Gödel's theorem are often incorrect, and the whole matter is very delicate. Philosophers like Lucas, or physicists like Penrose, illustrate that it is hard to explain Gödel's result to non-logicians. I'm afraid the time has not yet come for a popular explanation of machine's theology.

Let me try a short attempt. By Gödel's theorem we know that for any machine, the set of true propositions about the machine is bigger than the set of propositions provable by the machine. Now, Gödel already knew that a machine can prove that very fact about herself, and so can be "aware" of her own limitations. Such a machine is forced to discover a vast range of true propositions about herself that she cannot prove, and such a machine can study the logic that such propositions obey.
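
A compressed way to put that paragraph, in the modal notation of the logic G discussed below (this transcription is mine; read B as "the machine proves"):

\[ \{\, p : p \text{ is true about the machine} \,\} \;\supsetneq\; \{\, p : \text{the machine proves } p \,\} \]
\[ G \vdash \neg B\bot \rightarrow \neg B\neg B\bot \quad \text{(if she is consistent, she cannot prove her own consistency, and she can prove this very implication).} \]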

Then, it is a technical fact that such logics (of the non-provable, yet discoverable propositions) obey some theories of qualia which have been proposed in the literature (by J.L. Bell, for example).

So the machine which introspects itself (the mystical machine) is bound to discover the gap between the provable and the true (the G-G* gap), but also the difference between all the points of view (third person = provable, first person = provable-and-true, observable with probability 1 = provable-and-consistent, "feelable" = provable-and-consistent-and-true, etc.).
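
In the same notation (again my transcription of the variants Bruno lists, with B for provability and Dp short for \neg B\neg p):

\[ \text{third person} = Bp, \quad \text{first person} = Bp \wedge p, \quad \text{observable with probability 1} = Bp \wedge Dp, \quad \text{"feelable"} = Bp \wedge Dp \wedge p. \]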

When the machine studies the logic of those propositions, she rediscovers more or less a picture of reality akin to that of the mystical rationalists (like Plato, Plotinus, but also Nagasena, and many others).

If you are familiar with the logic G, I might be able to explain more. If not, read Smullyan's book, perhaps.
All this is new material, and premature popular versions can be misleading. Elementary logic is just not yet well enough known.

In fact, the UDA *is* the human-popular version of all this. The AUDA is the proper machine's technical version.

If you read the sane04(*) paper, feel free to ask for any precisions.

Best,

Bruno



Craig Weinberg

unread,
Jul 11, 2011, 10:54:17 AM7/11/11
to Everything List
Maybe I should try to condense this a bit. The primary disagreement we
have is rooted in how we view the relation between feeling, awareness,
qualia, and meaning, calculation, and complexity. I know from having
gone through dozens of these conversations that you are likely to
adhere to your position, which I would characterize as one which
treats subjective qualities as trivial, automatic consequences which
arise unbidden from "from relations that are defined by
computations".

My view is that your position adheres to a very useful and widely held
model of the universe, which is critically important for specialized
tasks of an engineering nature, but which wildly undervalues the
chasm separating ordinary human experience from neurology. Further I
think that this philosophy is rooted in Enlightenment Era assumptions
which, although spectacularly successful during the 17th-20th
centuries, are no longer fully adequate to explain the realities of
the relation between psyche and cosmos.

What I'm giving you is a model which picks up where your model leaves
off. I'm very familiar with all of the examples you are working with -
color perception, etc. I have thought about all of these issues for
many years, so unless you are presenting something which is from a
source that is truly obscure, you can assume that I already have
considered it.

>I disagree with this. Do you have an argument to help convince me to change
>my opinion?

You have to give me reasons why you disagree with it.

>There is no change in the wiring (hardware) of the computer, only a software
>change has occurred.

Right, that's what I'm saying. From the perspective of the wiring/
hardware/brain, there is no difference between consciousness and
unconsciousness. What you aren't seeing is that the unassailable fact
of our own consciousness is all the evidence that is required to
qualify it as a legitimate, causally efficacious phenomenology in the
cosmos rather than an epiphenomenology which magically appears
whenever it is convenient for physical mechanics. This is what I am
saying must be present as a potential within or through matter from
the beginning or not at all.

The next thing you would need to realize is that software is in the
eye of the beholder. Wires don't read books. They don't see colors. A
quintillion wires tangled in knots and electrified don't see colors or
feel pain. They're just wires. I can make a YouTube of myself sitting
still and smiling, and I can do a live video Skype and sit there and
do the same thing, and it doesn't mean that the YouTube is conscious
just because someone won't be able to tell the difference.

It's not the computer that creates meaning, it's the person who is
using the computer. Not a cat, not a plant, not another computer, but
a person. If a cat could make a computer, we probably could not use it
either, although we might have a better shot at figuring it out.

>would it concern you if you learned you had been reconstructed by the medical
>device's own internal store of matter, rather than use your original atoms?

No, no, you don't understand who you're talking to. I'm not some bio-
sentimentalist. If I thought that I could be uploaded into a billion
tongued omnipotent robot I would be more than happy to shed this
crappy monkey body. I'm all over that. I want that. I'm just saying
that we're not going to get there by imitating the logic of our higher
cortical functions in silicon. It doesn't work that way. Thought is an
elaboration of emotion, emotion of feeling, feeling of sense, and
sense of detection. Electronically stimulated silicon never gets
beyond detection, so ontologically it's like one big molecule in terms
of the sense that it can make. It can act as a vessel for us to push human
sense patterns through serially as long as you've got a conscious
human receiver, but the conduit itself has no taste for human sense
patterns, it just knows thermodynamic electromotive sense. Human
experience is not that. A YouTube of a person is not a person.

>Color is how nerve impulses from the optic nerve feel to us.
Why doesn't it just feel like a nerve impulse? Why invent a
phenomenology of color out of whole cloth to intervene between one group
of nerve cells and another? Color doesn't have to exist. It provides
no functional advantage over detection of light wavelengths through a
linear continuum. Your eyes could work just like your gall bladder,
detecting conditions and responding to them without invoking any
holographic layer of gorgeous 3D technicolor perception. One computer
doesn't need to use a keyboard and screen to talk to another, so it
would make absolutely no sense for such a thing to need to exist for
the brain to understand something that way, unless such qualities were
already part of what the brain is made of. It's not nerve impulses we
are feeling, we are nerves and we are the impulses of the nerves.
Impulses are nerve cells feeling, seeing, tasting, choosing. They just
look like nerve cells from the point of view of our body and its
technological extensions as it is reflected back to us through our own
perception of self-as-other.

>Data can be stored as magnetic poles on
>hard drives and tape, different levels of reflectivity on CDs...

Data is only meaningful when it is interpreted by a sentient organism.
Our consciousness is what makes the pattern a meaningful pattern. Read
a book, put it on tape, CD, flash drive, etc. It means nothing to the
cockroaches and deer foraging for food after the humans are gone.
Again, data is in the eye of the beholder; it is an epiphenomenon. We
are not data. We eat data but what we are is the sensorimotor topology
of a living human brain, body, lifetime, civilization, planet, solar
system, galaxy, universe. We have a name, but we are not a name.

>Nearly an infinite number could be constructed, and they are all accessible
>within this universe. (If you accept computationalism).

Constructed out of what? Why can't we just imagine a color zlue if
it's not different than imagining a square sitting on top of a circle?
You're trying to bend reality to fit your assumptions instead of
expanding your framework to accommodate the evidence.

>> >How does it know to stop at a red light if it is not aware of anything?
>> It doesn't stop at a red light. The car stops at an electronic signal
>> that fires when the photosensor and its associated semiconductors
>> match certain quantitative thresholds which correspond to what we see
>> as a red light.

>Sounds very much like a description one could make for why a person stops
>at a red light. There are inputs, some processing, and some outputs. The
>difference is you think the processing done by a computer is meaningless,
>while the processing done by a brain is not.

You are assuming that the inputs and outputs have any significance
independent of the processing. The processing is everything.
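
For concreteness, here is a minimal sketch in Python of the kind of
input-threshold-output chain both sides are describing. The wavelength
band and cutoff values are invented for illustration, not taken from any
real traffic-light detector; whether comparing numbers like this ever
amounts to seeing red, rather than merely matching thresholds, is exactly
the point in dispute.

# Minimal sketch of the photosensor logic described above.
# The band and the intensity cutoff are made-up illustrative values.

RED_BAND_NM = (620.0, 750.0)   # nominal long-wavelength band
MIN_INTENSITY = 0.6            # arbitrary detection threshold

def should_stop(peak_wavelength_nm, intensity):
    """Fire the 'stop' signal when the measured light falls inside the
    configured band at sufficient intensity. Nothing here refers to the
    experience of redness, only to two numbers and two cutoffs."""
    in_band = RED_BAND_NM[0] <= peak_wavelength_nm <= RED_BAND_NM[1]
    return in_band and intensity >= MIN_INTENSITY

print(should_stop(640.0, 0.9))   # True  -> brake
print(should_stop(530.0, 0.9))   # False -> no stop signal for greenish light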

>> The word processor is just semiconductors which are activated and
>> control in a pattern we deem meaningful. There is no distinction for
>> the computer between correct or incorrect spelling, other than
>> different logic gates being held open or closed.

>If that is so, then point out where this logic fails: "There is no
>distinction for a human that is sad or happy, there are just different
>collections of neurons either firing or not firing."

Right, you can't tell from the outside. If we discovered an alien word
processor in a crashed spaceship then we could not know whether or not
it is made out of something which understands what it's doing or
whether it's just an artifact which reflects the function of its use
by something else that understands what it's doing. Since we know how
our own word processors are made however, I have no reason to infer
that electrified silicon cares whether a word is spelled correctly.

>Qualia aren't directly connected to sensory measurements from the
>environment though. If I swapped all the red-preferring cones in your eyes
>with the blue-preferring cones, then shone blue-colored light at your eyes,
>you would report it as red.

Right, you don't even need eyes. I can imagine or dream red without
there being anything there for the senses to measure. What it is
directly connected to though is the internally consistent logic of
visual awareness. The universe doesn't pick yellow out of a hat, or if
it did, where is the hat and what else is in it?

> The brain interprets the dots and dashes, and creates the
> experience of blue. How? It is certainly very complex,

It's only complex if you presume that blue is created. It isn't. It's
primary like charge or spin. Blue is the human nervous system feeling
itself visually, just as language is the nervous system feeling itself
semantically. Blue is incredibly simple. It's probably what we have in
common with one celled organisms and their experience of
photosynthesis dating back to the Precambrian Era. Nerve color is cell
color. It takes an elaborate architecture of different kinds of cells
to step that awareness up to something the size and complexity of a
human being, so the cells are sense-augmented and concentrated into
organs which share their experience with the sense-diminished cells of
the cortex.

>Am I only a lowly adding machine, processing meaningless symbols in the way my
> programming tells me to process them?

No, no. There's nothing inherently less-marvelous about an a-
signifying machine of significant complexity compared to something
that can feel and think. I'm just saying that it's not the same thing.
Even an imitation can improve upon the original, but we are looking at
the wrong side of the Mona Lisa to accomplish that if we seek
consciousness from silicon.


On Jul 10, 11:48 pm, Jason Resch <jasonre...@gmail.com> wrote:
> On Sun, Jul 10, 2011 at 8:52 PM, Craig Weinberg <whatsons...@gmail.com> wrote:
>
> > >I don't think we can say what is or what wouldn't be possible with a
> > machine of these

(cut for hugeness)

On Jul 10, 11:48 pm, Jason Resch <jasonre...@gmail.com> wrote:
> concerned, and that they reside in ...

Jason Resch

unread,
Jul 11, 2011, 11:51:11 AM7/11/11
to everyth...@googlegroups.com
On Mon, Jul 11, 2011 at 9:54 AM, Craig Weinberg <whats...@gmail.com> wrote:
Maybe I should try to condense this a bit. The primary disagreement we
have is rooted in how we view the relation between feeling, awareness,
qualia, and meaning, calculation, and complexity. I know from having
gone through dozens of these conversations that you are likely to
adhere to your position, which I would characterize as one which
treats subjective qualities as trivial,

They are not trivial.  If they were, our brains would not require billions of neurons and quadrillions of connections.
 
automatic consequences which
arise unbidden from "from relations that are defined by
computations".

Yes, as you say below, it is a result of processing.
 

I agree consciousness has effects, and is not an epiphenomenon.
 

The next thing you would need to realize is that software is in the
eye of the beholder. Wires don't read books. They don't see colors. A
quintillion wires tangled in knots and electrified don't see colors or
feel pain.

I think they can.
 
They're just wires. I can make a YouTube of myself sitting
still and smiling, and I can do a live video Skype and sit there and
do the same thing, and it doesn't mean that the YouTube is conscious
just because someone won't be able to tell the difference.

There is a difference between a recording of a computation or a description of a computation, and the computation itself.
 

It's not the computer that creates meaning, it's the person who is
using the computer. Not a cat, not a plant, not another computer, but
a person. If a cat could make a computer, we probably could not use it
either, although we might have a better shot at figuring it out.

>would it concern you if you learned you had been reconstructed by the medical
>device's own internal store of matter, rather than use your original atoms?

No, no, you don't understand who you're talking to. I'm not some bio-
sentimentalist. If I thought that I could be uploaded into a billion
tongued omnipotent robot I would be more than happy to shed this
crappy monkey body. I'm all over that. I want that. I'm just saying
that we're not going to get there by imitating the logic of our higher
cortical functions in silicon. It doesn't work that way. Thought is an
elaboration of emotion, emotion of feeling, feeling of sense, and
sense of detection. Electronically stimulated silicon never gets
beyond detection, so ontologically it's like one big molecule in the
sense that it can make. It can act as a vessel for us to push human
sense patterns through serially as long as you've got a conscious
human receiver, but the conduit itself has no taste for human sense
patterns, it just knows thermodynamic electromotive sense. Human
experience is not that. A YouTube of a person is not a person.

Right, a youtube video is not a person, but I think silicon, or any appropriate processing system can perceive.
 

>Color is how nerve impulses from the optic nerve feel to us.
Why doesn't it just feel like a nerve impulse? Why invent a
phenomenology of color out of whole cloth to intervene between one group
of nerve cells and another? Color doesn't have to exist. It provides
no functional advantage over detection of light wavelengths through a
linear continuum. Your eyes could work just like your gall bladder,
detecting conditions and responding to them without invoking any
holographic layer of gorgeous 3D technicolor perception. One computer
doesn't need to use a keyboard and screen to talk to another, so it
would make absolutely no sense for such a thing to need to exist for
the brain to understand something that way, unless such qualities were
already part of what the brain is made of.

If red did not look very different from green, you would fail to pick out the berries in the bush.
 
It's not nerve impulses we
are feeling, we are nerves and we are the impulses of the nerves.
Impulses are nerve cells feeling, seeing, tasting, choosing. They just
look like nerve cells from the point of view of our body and its
technological extensions as it is reflected back to us through our own
perception of self-as-other.

>Data can be stored as magnetic poles on
>hard drives and tape, different levels of reflectivity on CDs...

Data is only meaningful when it is interpreted by a sentient organism.

Yes information must be interpreted by a processing system to become meaningful, but it doesn't have to be a biological organism.
 
Our consciousness is what makes the pattern a meaningful pattern. Read
a book, put it on tape, CD, flash drive, etc. It means nothing to the
cockroaches and deer foraging for food after the humans are gone.
Again, data is in the eye of the beholder; it is an epiphenomenon. We
are not data. We eat data but what we are is the sensorimotor topology
of a living human brain, body, lifetime, civilization, planet, solar
system, galaxy, universe. We have a name, but we are not a name.

>Nearly an infinite number could be constructed, and they are all accessible
>within this universe.  (If you accept computationalism).

Constructed out of what?

Information and the processing thereof.
 
Why can't we just imagine a color zlue if
it's not different than imagining a square sitting on top of a circle?

Our imagination does not cause the organization of the color processing centers of the brain to rewire themselves.  If we could rewire our brains we could experience new colors.
 
You're trying to bend reality to fit your assumptions instead of
expanding your framework to accommodate the evidence.

You are making assumptions of a direct chemical-to-qualia relation built into the physics of the universe.
 

>> >How does it know to stop at a red light if it is not aware of anything?
>> It doesn't stop at a red light. The car stops at an electronic signal
>> that fires when the photosensor and its associated semiconductors
>> match certain quantitative thresholds which correspond to what we see
>> as a red light.

>Sounds very much like a description  one could make for why a person stops
>at a red light.  There are inputs, some processing, and some outputs.  The
>difference is you think the processing done by a computer is meaningless,
>while the processing done by a brain is not.

You are assuming that the inputs and outputs have any significance
independent of the processing. The processing is everything.

Exactly, the processing is everything.  That is what I have been trying to say all this time. :-)
 

You never addressed the evidence I gave regarding how magenta is an invented color.

What do you think the world looks like to birds, which are tetrachromats?  Do you think they still see various combinations of reds, greens, blues?  What do you think ultraviolet looks like to them?  Different brains produce different sensations, and there are far more possible brains than there are types of fundamental physical phenomena.  You won't find a chemical for cyan, or a particle for turquoise, etc.
 
Blue is the human nervous system feeling
itself visually, just as language is the nervous system feeling itself
semantically. Blue is incredibly simple.
It's probably what we have in
common with one celled organisms and their experience of
photosynthesis dating back to the Precambrian Era. Nerve color is cell
color.

We can perceive millions of different colors, but there are not millions of types of neurotransmitters, nor millions of types of neurons.  How does your theory address this?
 
It takes an elaborate architecture of different kinds of cells
to step that awareness up to something the size and complexity of a
human being, so the cells are sense-augmented and concentrated into
organs which share their experience with the sense-diminished cells of
the cortex.

>Am I only a lowly adding machine, processing meaningless symbols in the way my
> programming tells me to process them?

No, no. There's nothing inherently less-marvelous about an a-
signifying machine of significant complexity compared to something
that can feel and think. I'm just saying that it's not the same thing.

What, aside from their parts, is different about them?
 
Even an imitation can improve upon the original, but we are looking at
the wrong side of the Mona Lisa to accomplish that if we seek
consciousness from silicon.



Jason

meekerdb

unread,
Jul 11, 2011, 1:49:45 PM7/11/11
to everyth...@googlegroups.com
On 7/10/2011 6:20 AM, Craig Weinberg wrote:
>> What in the brain would be not Turing emulable
>>
> Let's take the color yellow for example. If you build a brain out of
> ideal ping pong balls, or digital molecular emulations, does it
> perceive yellow from 580nm oscillations of electromagnetism
> automatically, or does it see yellow when its own emulated units are
> vibrating on the functionally proportionate scale to itself? Does the
> ping pong ball brain see its own patterns of collisions as yellow or
> does yellow = electromagnetic ~580nm and nothing else. At what point
> does the yellow come in? Where did it come from? Were there other
> options? Can there ever be new colors? From where? What is the minimum
> mechanical arrangement required to experience yellow?
>
>

When the aforesaid ping pong ball brain can cause the word "yellow" to
be enunciated and/or written on all and only occasions that normal
English speakers do. When it anticipates traffic signal lights turning
red. When it identifies sour fruit.....

Brent

Craig Weinberg

unread,
Jul 11, 2011, 2:29:07 PM7/11/11
to Everything List
>They are not trivial. If they were, our brains would not require billions
>of neurons and quadrillions of connections.

Trivial in the technical sense of not being as real as the objective
mechanics which are associated with them. You are saying that it's
only the high quantity of neurons and connections between them that
makes them real rather than the other way around. To say that
subjective qualities are non-trivial would mean acknowledging that it
is the subjective qualities themselves which are driving cells,
neurons, organisms, and cultures rather than just mechanism. You are
saying that hydrogen is non-trivial but yellow is one of an infinite
number of possible colors. I'm saying that the visible spectrum is as
fundamental and irreducible as the periodic table, even though it may
require a more complex organic arrangement to realize subjectively.

>Yes, as you say below, it is a result of processing.
Processing isn't an independent thing, it's what things do. In the
context of input>processing>output, processing stands for
everything in between input and output: processing by whatever
phenomenon is the processor.

>> quintillion wires tangled in knots and electrified don't see colors or
>> feel pain.

>I think they can

Based upon what? Can cartoons see or feel pain? Why not?

>>it doesn't mean that the YouTube is conscious
>> just because someone won't be able to tell the difference.

>There is a difference between a recording of a computation or a description
>of a computation, and the computation itself.

Yellow is not a computation. Discerning whether something is a
different frequency of luminosity than another is a computation,
correlating that to a sensory experience is a computation, but the
experience itself is not a computation. I can give you coordinates for
a polygon and you can draw it on paper or in your mind but giving you
the wavelength for a shade of X-Ray will not help you see its color
or create a color. It doesn't matter how complex my formula is. Color
cannot be described quantitatively. It's not a matter of waiting for
technology to get better, it's a matter of understanding the
limitations of the exterior topology of our universe.

>Right, a youtube video is not a person, but I think silicon, or any
>appropriate processing system can perceive.

I think that anything can perceive, whether it's a processing system
or not. Not human perception, but if it's matter, then it has
electromagnetic properties and corresponding sensorimotor coherence.
All matter makes sense. It's just that the sense the brain makes
recapitulates a specific layer cake of organic molecular, cellular
biochemical, somatic zoological, neuro anthropological, and
psychological semiotic protocols which are not separate from what they
physically are. You can't export the canon of microbiological wisdom
into a stone unless you make the stone live as a creature. It's not
third party translatable. If it were, then every rock and tree would
by now have learned to speak Portuguese and cook up a mean linguine
with clams.

>If red did not look very different from green, you would fail to pick out
>the berries in the bush.

That's a fallacy. First you're reducing red or green to a mechanical
function of visual differentiation. Such a definition of color does
not require conscious experience or vision at all. The bush and the
berries could just look like what they taste like. Why create a
separate perceptual ontology? You're also reverse engineering color to
match the contemporary assumptions of evolutionary biology. We have no
reason to suspect that selection pressure would or could conjure a
color palette out of thin air. A longer beak, yes. Prehensile tail,
sure. You've already got the physical structure, it just gets
exaggerated through heredity. Where is the ancestor of red though?

>Yes information must be interpreted by a processing system to become
>meaningful, but it doesn't have to be a biological organism.

Systems don't interpret information, they just present it in different
ways. It makes no difference to a computer whether a text is stored as
natural language, hexadecimal bytes, or semiconductor states. There is
no signifying coherence on the computer level, it's just an array of
low level phenomena being used to simulate and reflect high level
organic sense. You might be able to build chemo-electronic inorganism
which feels and has meaning, but my sense is that it would end up
being no more controllable than biological entities. What we want out
of a processing system - reliability, obedience, precision, etc, is
precisely what is lost when we want to traffic in meaning beyond
digital certainties.

>> Constructed out of what?

>Information and the processing thereof.

You cannot construct a color out of information, any more than you can
construct dinner out of information. Color is concrete sensory
experience - ineffable, idiopathic, self-revealing. There is no
information there, no recipe, it's an ontological prerequisite of
biological visual sense.

>> Why can't we just imagine a color zlue if
>> it's not different than imagining a square sitting on top of a circle?

>Our imagination does not cause the organization of the color processing
>centers of the brain to rewire themselves. If we could rewire our brains we
>could experience new colors.

If color is purely a mental phenomenon, then why should it require any
rewiring? We can only experience new colors if there are new colors to
experience. Color could just as easily be as finite and specific as
the periodic table and emerge at the subatomic level. We may well be
able to see new colors with gene patches or neurotherapies, but it
doesn't change the fact that those colors too must either be part
of a larger fixed ontology of possible colors or part of a dynamic
color creation schema. Either way it's metaphysical unless you model
sense as a function of matter.

>You are making assumptions of a direct chemical-to-qualia relation built
>into the physics of the universe.

It's not an assumption, it's an intentional hypothesis.

>You never addressed the evidence I gave regarding how magenta is an invented
>color.

All colors are invented. Just not by us. Magenta, brown, beige, grey,
etc are further evidence that color is not simply visible
electromagnetism - it is the sensorimotor interior of
electromagnetism. It makes sense to us that black should be the
absence of light and white should be the presence of all wavelengths.
That sense runs through both sides of vision - the optical exterior
and the perceptual interior. It doesn't have to be that way. Black
could look like orange and white could look like red-orange and we'd
still be able to tell the difference. Black vs white though makes a
specific visual sense to us. To anything that can see it.


>See this video:
>http://www.closertotruth.com/video-profile/What-is-the-Mind-Body-Prob...

Ugh. Minsky is wrong. Just because there are more steps involved in
perception doesn't bring the mechanism of neural spikes or ion pumping
any closer to the experience of perception. He's using complexity as a
veil. "Your poor little minds just haven't figured it out yet." It's
not complex, it's just looking at the phenomenon from the wrong end.
He doesn't see that perception doesn't have to correlate to the
mechanics of the brain directly, they both can correlate to different
sides of an underlying noumenon. Watch David Chalmers instead. His
insights make much more sense to me: http://www.youtube.com/watch?v=kmZaA_xoJiM

>We can perceive millions of different colors, but there are not millions of
>types of neurotransmitters, nor millions of types of neurons. How does your
>theory address this?

I'm not suggesting a one to one correspondence of neurotransmitters to
colors. I'm saying that the sense of the visual spectrum as we know it
is an innate potential of human neurology at the brain level. It may
arise at a lower level - maybe at the level of photosynthesis or the
level of atoms - perhaps at a higher level of astrophysical coherence;
nebula etc. Maybe it's woven into the story of the cosmos itself, in
the fabric of what separates literal fact from metaphorical meaning.

>What, aside from their parts, is different about them?

What's the difference between you reading this and being in a coma?
What if I could offer the chance for you to have a perfect body, which
will not age or die, which will have powerful extensions of physical
ability, but there is one catch. You will never be able to experience
a single moment that is not filled with blinding, shrieking pain. You
will perceive yourself to be terrifyingly ugly and your world will be
filled only with the most revolting odors and noises. You will find
that you are able to eat and reproduce quite successfully, only your
experience of it will be as gagging and writhing in interminable
nausea. All you would have to comfort you in your unending,
pleasureless misery will be the knowledge that to the outside world
you will appear to be a fantastic human being, successful in all
areas, even that thought however, will repulse you and fill you with
bottomless dread.

I'm assuming that you would agree that such a deal would not be worth
it, but can you explain why? Why privilege one set of patterns over
another? That's what consciousness gives us. Sensorimotive
participation. A way to perceive qualitative differences and feel like
we can choose to move toward or away from them. This is the basis of
life as much as ATP or DNA, but an entirely different topology:
forward and back, high/low, right and wrong, pain and pleasure,
presence and absence. See?

On Jul 11, 11:51 am, Jason Resch <jasonre...@gmail.com> wrote:
> ...

Craig Weinberg

unread,
Jul 11, 2011, 2:35:43 PM7/11/11
to Everything List
But it could do those things without ever experiencing yellow. A
traffic signal could look like the smell of burnt toast and achieve
the exact same functionality. Yellow isn't just some variable used as a
placeholder. It has a specific character that must be seen first hand
to have any understanding of. Without that subjective experience of
what yellow looks like, you're just simulating behaviors of yellow-
sightedness.

meekerdb

unread,
Jul 11, 2011, 5:33:46 PM7/11/11
to everyth...@googlegroups.com
On 7/10/2011 6:52 PM, Craig Weinberg wrote:
> I do think that we can say, with the same certainty that we cannot
> create a square circle, that it would not be possible at any level of
> complexity. It's not that they can't create novelty or surprise, it's
> that they can't feel or care about their own survival. I'm saying that
> the potential for awareness must be built in to matter at the lowest
> level or not at all.

At the lowest level ping pong balls and brains are made of the same stuff
(quarks, electrons, photons,....). So the potential for awareness is
built in to quarks, electrons, photons, etc. Your position seems
incoherent. You're saying brains are made of special stuff that can be
conscious. But on the other hand you say that if any stuff is special
all stuff must be special (which kind of robs "special" of its usual
meaning). But then you say that even if all stuff is special you can't
make a conscious brain out of just any stuff, you have to make it out of
special stuff. ???

Brent

meekerdb

unread,
Jul 11, 2011, 5:38:24 PM7/11/11
to everyth...@googlegroups.com
On 7/10/2011 6:52 PM, Craig Weinberg wrote:
> Yeah I like that demo. It's not a new primary color though, that's
> just contradictory mixing of familiar colors.

Primary colors aren't even a mental construct. They're a language
choice. Orange is a new primary color (according to you), as is cyan and
magenta and brown and white and black. Some languages have dozens of
colors some have only a few. Which are called "primary" is purely a
language convention.

Brent

meekerdb

unread,
Jul 11, 2011, 5:43:27 PM7/11/11
to everyth...@googlegroups.com
On 7/10/2011 8:48 PM, Jason Resch wrote:
> Qualia aren't directly connected to sensory measurements from the
> environment though. If I swapped all the red-preferring cones in your
> eyes with the blue-preferring cones, then shone blue-colored light at
> your eyes, you would report it as red.

For about a week. And then he'd report it as blue. At least that's
what I'd predict based on people wearing glasses that invert everything
or swap right and left.

Brent

meekerdb

unread,
Jul 11, 2011, 5:47:24 PM7/11/11
to everyth...@googlegroups.com
On 7/10/2011 8:48 PM, Jason Resch wrote:
Why can't we mentally construct new colors
ourselves?

We have little control over the number of cone cells we are born with. (But this may change soon, using gene therapy). If we had full control to rewire our brain in any way we wanted, we could perceive entirely novel, never before seen colors.

Supposedly people who receive artificial lenses in their eyes can see a little into the ultra-violet part of the spectrum. I don't suppose this gives them the sensation of a previously unseen color though since the eye doesn't have any cones with specific pigment for UV (at least my mother says she doesn't notice any new colors).

Brent

Craig Weinberg

unread,
Jul 11, 2011, 5:57:22 PM7/11/11
to Everything List
I'm having trouble understanding what you're saying.

>> Computer chips don't behave in the same way though.

>That is just a question of choice of level of description. Unless you
>believe in substantial infinite souls.

Not sure what you mean in either sentence. A plastic flower behaves
differently than a biological plant. A computer chip behaves
differently than a neuron. Why assume that a computer chip can feel
what a living cell can feel?

>> Your computer
>> can't become an ammoniaholic or commit suicide.

>Why?

I'm talking about your actual computer that you are reading this on.
Are you asking me why it can't commit suicide or spontaneously develop
a hankering for ammonia?

>The other side is well explained in the comp theory.

I'm giving it a good try reading your SANE2004 pdf but I think I'm
hovering at around 4% comprehension. If you want me to be able to
consider your hypothesis I think that you will have to radically
simplify its insights to concrete examples which are not dependent
upon references to anyone else's work, logical/mathematical/or
philosophical notation, teleportation, or Turing anything.

As near as I can tell, it seems like you are looking at the hows and
whys of sensation - how physics and sensation are both logical
relations rather than noumenal existential artifacts and why it might
be necessary. I can't really tell what your answer is though. My focus
is on describing what and who we are in the simplest way. To my mind,
what and who we are cannot be described in purely arithmetic
relations, unless arithmetic relations automatically obscure their
origin and present themselves in all possible universes as color,
sound, taste, feeling, etc.

>No problem. That would mean that the substitution level is low. It
>does no change the conclusion: the physical world is a projection of
>the mind, and the mind is an inside view of arithmetic (or comp is
>false, that is, at all level and you need substantial souls). But we
>don't even find a substance for explaining matter, so that seems a
>regression to me. Anyway, it is inconsistent with the comp assumption.

When you say that the physical world is a projection of the mind, do
you mean that in the sense that it might be possible to stop bullets
directly with our thoughts or in the sense of physicality only seeming
physical because our mind is programmed to read it as such? I would
agree that physicality arises only from the body's own physical
composition and our mind's apprehension of the body's awareness of
itself in relation to its world, but I wouldn't say that physical
matter is a mental phenomenon. By definition, mental phenomena are
exempt from physical constraints, such as gravity, thermodynamics,
etc.

I don't know about the mind being an inside view of arithmetic. I
would say that arithmetic is only one category of sense and see no
reason to privilege it above aesthetic sense or anthropomorphic sense.
Sense is the elemental level to me. Pattern and pattern detection.
Counting is just another pattern. Not all patterns can be reduced to
something that can be counted. Some things have to be named. Still
others cannot be named or numbered.

>But computer science explains why and how such feelings occur.

Computer science explains why pain exists?

>If you get the six or seven first steps, it is an easy exercise to
>show that matter cannot be cloned. Ask if you have any difficulty.

Unfortunately I can't really get any of the steps.
> Read http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract...

Jason Resch

unread,
Jul 11, 2011, 6:09:14 PM7/11/11
to everyth...@googlegroups.com

On Jul 11, 2011, at 4:47 PM, meekerdb <meek...@verizon.net> wrote:

> On 7/10/2011 8:48 PM, Jason Resch wrote:
>>
>> Why can't we mentally construct new colors
>> ourselves?
>>
>> We have little control over the number of cone cells we are born

>> with. (But this may change soon, using gene therapy). If we had

>> full control to rewire our brain in any way we wanted, we could
>> perceive entirely novel, never before seen colors.
>
> Supposedly people who receive artificial lenses in their eyes can

> see a little into the ultra-violet part of the spectrum. I don't

> suppose this gives them the sensation of a previously unseen color
> though since the eye doesn't have any cones with specific pigment
> for UV (at least my mother says she doesn't notice any new colors).
>
> Brent


What I've heard is that those people report UV light as purplish
white. It is because UV light stimulates all three types of cones,
but affects the short-wavelength-preferring cone somewhat more strongly.

Jason

Craig Weinberg

unread,
Jul 11, 2011, 6:29:27 PM7/11/11
to Everything List
I'm not talking about actual ping pong balls, I'm talking about ideal
ping pong balls which are not made of any subordinate units. Just
white spheres which serve as placeholders for atoms, digital vectors,
whatever. Just the principle of basic things having only physical
qualities to demonstrate how it doesn't follow that arrangement in and
of itself can cause anything to live or feel.

Instead, I propose that real atoms have real properties which we
cannot observe unless they are in a complex arrangement which is
similar enough to our own that we can relate to it as a whole. All
stuff is special, but the quality that makes it special is the ability
to feel more and more special through combining in groups, meta
groups, meta meta groups, etc. Externally, it's expressed over space
as increasingly elaborate nested groupings or inertial frames of
objects and movements governed by electromagnetic relativity, but
internally it's expressed as a coherence of sensorimotive perceptual
frames. Instead of more equaling literally more cells or synapses,
more equals better, greater, richer. Not merely larger, faster,
denser, closer, but more important, more powerful, more satisfying.

To say that something is conscious just means that it 'acts like us'.
The less we can relate to any particular thing, the more we fail to
perceive it as employing awareness. Instead we see it as automatic
'nature', probability, etc. That's just what it looks like from the
outside, out of focus as it were, on different scales and in non-human
contexts. The universe is all one thing but it's a zillion different
private interior universes also depending on what you are, how you
participate in it.

>Primary colors aren't even a mental construct. They're a language
>choice. Orange is a new primary color (according to you), as is cyan and
>magenta and brown and white and black. Some languages have dozens of
>colors some have only a few. Which are called "primary" is purely a
>language convention.

I'm not talking about the idea of a primary color as linguistic
distinction, I'm talking about the inability of a color to be reduced
to combinations of other colors. Red, Green, and Blue are the primary
hues of projected light, Red, Yellow, and Blue are the primary hues of
reflected light. Cultures may not distinguish green from blue as far
as referring to it by name, but they can see that green and green plus
cannot be made by combining any other colors if it were demonstrated
to them.

meekerdb

unread,
Jul 11, 2011, 7:12:42 PM7/11/11
to everyth...@googlegroups.com
On 7/11/2011 3:29 PM, Craig Weinberg wrote:
> I'm not talking about the idea of a primary color as linguistic
> distinction, I'm talking about the inability of a color to be reduced
> to combinations of other colors. Red, Green, and Blue are the primary
> hues of projected light, Red, Yellow, and Blue are the primary hues of
> reflected light.

It's not the case that all colors can be reproduced by combinations of a
fixed choice of red, green, and blue. I refer you to pg 818 of Sears
and Zemansky - my freshman physics text. In any case, the fact that one
can approximately match a color with an RGB mixture is a consequence of
the human eye having three pigments in the color receptors. If it had
four, then you'd need another "primary" color.

Brent
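
The pigment point above can be made concrete with a toy Python sketch.
The Gaussian sensitivity curves below are invented stand-ins, not
measured cone data; the only claim illustrated is that a detector with
three pigment types collapses any physical spectrum into three numbers,
which is why mixtures of three primaries can approximate its matches at
all, and why a fourth pigment type would call for a fourth primary.

import math

# Toy Gaussian "sensitivity curves". The peak wavelengths and the shared
# width are illustrative guesses, not measured human cone data.
WIDTH_NM = 40.0
TRICHROMAT = {"S": 445.0, "M": 535.0, "L": 565.0}
TETRACHROMAT = dict(TRICHROMAT, X=505.0)   # hypothetical fourth pigment

def encode(spectrum, pigments):
    """Collapse a spectrum (list of (wavelength_nm, power) pairs) into
    one response number per pigment type."""
    def resp(peak):
        return sum(p * math.exp(-((w - peak) / WIDTH_NM) ** 2)
                   for w, p in spectrum)
    return {name: round(resp(peak), 3) for name, peak in pigments.items()}

# The spectrum, whatever its physical detail, is reduced to three numbers;
# spectra that agree on those numbers are indistinguishable to this
# detector, while a fourth pigment adds a fourth axis to be matched.
light = [(505.0, 0.4), (580.0, 0.6), (640.0, 0.3)]
print(encode(light, TRICHROMAT))
print(encode(light, TETRACHROMAT))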

Jason Resch

unread,
Jul 11, 2011, 7:13:06 PM7/11/11
to everyth...@googlegroups.com
On Mon, Jul 11, 2011 at 5:29 PM, Craig Weinberg <whats...@gmail.com> wrote:
I'm not talking about the idea of a primary color as linguistic
distinction, I'm talking about the inability of a color to be reduced
to combinations of other colors. Red, Green, and Blue are the primary
hues of projected light, Red, Yellow, and Blue are the primary hues of
reflected light. Cultures may not distinguish green from blue as far
as referring to it by name, but they can see that green and green plus
cannot be made by combining any other colors if it were demonstrated
to them.


Craig,

Do you believe there is something physically special about red, green, and blue compared to other wavelengths of light?  Do you think other animals that see colors can only see combinations of red, green and blue, regardless of the number of types of color-receptive cells in their retina?

Jason 

Craig Weinberg

unread,
Jul 11, 2011, 8:00:36 PM7/11/11
to Everything List
There are humans who have four pigments in their color receptors but
they do not perceive a fourth primary color.
http://www.klab.caltech.edu/cns186/papers/Jameson01.pdf

They just have increased distinction between the primary colors we
perceive. I take that to mean that they cannot point to anything in
nature as having a bright color that ordinary trichromats have never
seen.

Yeah I don't know the technical descriptions of what constitutes
primacy in hues, but it's not important to what I'm trying to get at.
The important thing is that the range and variety of colors we can see
or imagine is not explainable in purely quantitative or physical
terms, neither is it metaphysical, random, made up, or arbitrary. It
constitutes a visual semantic firmament, similar to the periodic
table. The difference between the color wheel and the periodic table
is that since experiences and feelings are phenomena that are
ontologically perpendicular to their external mechanics, they are not
strictly definable through literal observation and measurement, but
through first hand encounters which address the subject directly in a
more uncertain, figurative way. Colors look different depending on
what colors they are adjacent to, what mood we are in, our gender,
etc. unlike iron and magnesium which remain the same if placed next to
each other.

Jason Resch

unread,
Jul 11, 2011, 8:08:47 PM7/11/11
to everyth...@googlegroups.com
On Mon, Jul 11, 2011 at 1:29 PM, Craig Weinberg <whats...@gmail.com> wrote:
>They are not trivial.  If they were, our brains would not require billions
>of neurons and quadrillions of connections.

Trivial in the technical sense of not being as real as the objective
mechanics which are associated with them. You are saying that it's
only the high quantity of neurons and connections between them that
makes them real rather than the other way around.

Not just their quantity, but the relationships of their connections to each other.
 
To say that
subjective qualities are non-trivial would mean acknowledging that it
is the subjective qualities themselves which are driving cells,
neurons, organisms, and cultures rather than just mechanism. You are
saying that hydrogen is non-trivial but yellow is one of an infinite
number of possible colors. I'm saying that the visible spectrum is as
fundamental and irreducible as the periodic table, even though it may
require a more complex organic arrangement to realize subjectively.

>Yes, as you say below, it is a result of processing.
Processing isn't an independent thing, it's what things do.

This is functionalism, it is what things do that matters, not what they are made of.
 
In the
context of input>processing>output, processing stands for
everything in between input and output: processing by whatever
phenomenon is the processor.


You are defining the process as everything that happens in the middle, but how much of that everything is relevant to the outcome?  If a neuron releases 278,231,782,956 ions instead of 278,231,782,957, is that going to be relevant to how the mind evolves over time, or to what qualia are experienced?  What about neutrinos passing through the head of the person; are those important to the model of the brain?  I think you would find that a lot of the processes going on within a person's head are irrelevant to the production of consciousness.

In an earlier post you mentioned hemoglobin playing a role, but if we could substitute a person's blood with some other oxygen rich solution which was just as capable of supporting the normal metabolism of cells, then why should the brain behave any differently, and if it does not behave differently how could the perception of yellow be said to be different?  The mind experiencing the sensation of yellow isn't going to say or do anything different if its outputs are the same.  The two minds would contain the same information, and thus there is nothing to inform the mind of any difference in perception.
 

>> quintillion wires tangled in knots and electrified don't see colors or
>> feel pain.

>I think they can

Based upon what?

My belief that dualism and mind-brain identity theory are false, and the success of multiple realizability, functionalism, and computationalism in resolving various paradoxes in the philosophy of mind.
 
Can cartoons see or feel pain? Why not?

Cartoons aren't systems that receive information and update their state and disposition based upon the processing of that information.


>>it doesn't mean that the YouTube is conscious
>> just because someone won't be able to tell the difference.

>There is a difference between a recording of a computation or a description
>of a computation, and the computation itself.

Yellow is not a computation. Discerning whether something is a
different frequency of luminosity than another is a computation,
correlating that to a sensory experience is a computation, but the
experience itself is not a computation. I can give you coordinates for
a polygon and you can draw it on paper or in your mind but giving you
the wavelength for a shade of X-Ray will not help you see its color
or create a color. It doesn't matter how complex my formula is. Color
cannot be described quantitatively.

It is more than a one-dimensional quantity, I agree.  It is a value of rather high complexity and dimensionality existing in the context of your neural network.  Since your neural network is highly complex, the effects the perception has (what it takes to define it) are likewise highly complex.  I think the primary reason you have come to your conclusions, while I have come to mine, is that you think qualia such as yellow are simple, while I think the opposite is true.  If visual sensations were so simple, why would 30% of your cortex be devoted to its processing?  This is a huge number of neurons, for handling at most maybe a million or so pixels.  How many neurons do you think are needed to sense each "pixel" of yellow?
 
It's not a matter of waiting for
technology to get better, it's a matter of understanding the
limitations of the exterior topology of our universe.

>Right, a youtube video is not a person, but I think silicon, or any
>appropriate processing system can perceive.

I think that anything can perceive, whether it's a processing system
or not. Not human perception, but if it's matter, then it has
electromagnetic properties and corresponding sensorimotor coherence.
All matter makes sense.

So would you say a rock sees the yellow of the sun and the blue of the sky?  It just isn't able to tell us that it does?
 
It's just that the sense the brain makes
recapitulates a specific layer cake of organic molecular, cellular
biochemical, somatic zoological, neuro anthropological, and
psychological semiotic protocols which are not separate from what they
physically are. You can't export the canon of microbiological wisdom
into a stone unless you make the stone live as a creature. It's not
third party translatable. If it were, then every rock and tree would
by now have learned to speak Portuguese and cook up a mean linguine
with clams.

>If red did not look very different from green, you would fail to pick out
>the berries in the bush.

That's a fallacy. First you're reducing red or green to a mechanical
function of visual differentiation.

That is the reason for seeing different colors is it not?  What defines red and green besides the fact that they are perceived differently?
 
Such a definition of color does
not require conscious experience or vision at all. The bush and the
berries could just look like what they taste like. Why create a
separate perceptual ontology?

That would be confusing, I couldn't tell if I were looking at a bush or eating.  I wouldn't know the relative position of the bush in relation to myself or other objects either. 
 
You're also reverse engineering color to
match the contemporary assumptions of evolutionary biology. We have no
reason to suspect that selection pressure would or could conjure a
color palette out of thin air.

We have some reason.  There are species of monkeys where some of the females are trichromatic, and all the males are dichromatic.  When the first trichromats evolved, did their brains and senses not conjure up a new palette which never before existed?
 
A longer beak, yes. Prehensile tail,
sure. You've already got the physical structure, it just gets
exaggerated through heredity. Where is the ancestor of red though?

The first being which had both senses capable of distinguishing different frequencies of light, and a brain capable of integrating those differences into the environmental model of that being.  It is likely that this being did not perceive red light in the same way we do, it is even possible you don't perceive red in the same way I do.  For all we know, your brain may be the ancestor of red as you know it.  Two people can taste the same thing, and one person likes it while the other dislikes it, just like two people can read the same book and like it or dislike it.  It depends on the structure of their brains.
 

>Yes information must be interpreted by a processing system to become
>meaningful, but it doesn't have to be a biological organism.

Systems don't interpret information, they just present it in different
ways. It makes no difference to a computer whether a text is stored as
natural language, hexadecimal bytes, or semiconductor states.

If it were just stored in memory passively, it makes no difference, but if the computer attempted to parse or otherwise process the data then the format it is in does become important to the proper processing of that information.
 
There is
no signifying coherence on the computer level, it's just an array of
low level phenomena being used to simulate and reflect high level
organic sense. You might be able to build chemo-electronic inorganism
which feels and has meaning, but my sense is that it would end up
being no more controllable than biological entities. What we want out
of a processing system - reliability, obedience, precision, etc, is
precisely what is lost when we want to traffic in meaning beyond
digital certainties.

>> Constructed out of what?

>Information and the processing thereof.

You cannot construct a color out of information,

What do your qualia do?  They inform you.  Do you have an example of anything that is informative but is not information?
 
any more than you can
construct dinner out of information. Color is concrete sensory
experience - ineffable, idiopathic, self-revealing. There is no
information there, no recipe, it's an ontological prerequisite of
biological visual sense.

>> Why can't we just imagine a color zlue if
>> it's not different than imagining a square sitting on top of a circle?

>Our imagination does not cause the organization of the color processing
>centers of the brain to rewire themselves.  If we could rewire our brains we
>could experience new colors.

If color is purely a mental phenomenon, then why should it require any
rewiring?

Our imagination and thoughts do not give us the ability to completely reprogram ourselves.  If they did, people would corrupt their minds or put them into unusable states by thinking certain thoughts.  Minds would crash in the same way our computers so regularly do.
 
We can only experience new colors if there are new colors to
experience. Color could just as easily be as finite and specific as
the periodic table and emerge at the subatomic level.

You mentioned earlier that light frequency is a linear value.  Why then just three primary colors?
 
We may well be
able to see new colors with gene patches or neurotherapies, but it
doesn't change the fact that those colors too must either be part
of a larger fixed ontology of possible colors or part of a dynamic
color creation schema. Either way it's metaphysical unless you model
sense as a function of matter.

I would say qualia are a function of minds, which are a function of processing, which is a function of matter. (which Bruno would add is really a function of arithmetic)
 

>You are making assumptions of a direct chemical-to-qualia relation built
>into the physics of the universe.

It's not an assumption, it's an intentional hypothesis.

If you think the primary colors are fundamental, then to explain colors such as pink, you must add the concept of information and quantity to the fundamental primary colors.  For example, pink = 2 parts blue, 2 parts red, 1 part green.  So this quantitative information is a necessary component of the experience of pink.  Once you get to this point, you might as well abandon the fundamentalness of the primary colors, they are just markers corresponding to activity of different neurons.
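
Taken literally, the recipe above is just a short vector of quantities.
Here is a minimal Python sketch of what adding quantity to the primaries
amounts to (the 2:2:1 proportions are simply the example given; the
normalization and the 8-bit scaling are illustrative choices, not part of
the argument):

# Treat the primaries purely as channel labels and the recipe as weights.

def mix(parts):
    """Turn a dict of {channel: parts} into normalized channel activations."""
    total = sum(parts.values())
    return {ch: p / total for ch, p in parts.items()}

pink_recipe = {"red": 2, "blue": 2, "green": 1}
activations = mix(pink_recipe)
rgb_8bit = {ch: round(255 * a) for ch, a in activations.items()}

print(activations)   # {'red': 0.4, 'blue': 0.4, 'green': 0.2}
print(rgb_8bit)      # {'red': 102, 'blue': 102, 'green': 51}

Once the recipe is written this way, the "color" on the machine side is
nothing but that triple of numbers, which is the sense in which the
primaries become markers for relative activity rather than fundamentals.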
 

>You never addressed the evidence I gave regarding how magenta is an invented
>color.

All colors are invented. Just not by us. Magenta, brown, beige, grey,
etc are further evidence that color is not simply visible
electromagnetism - it is the sensorimotor interior of
electromagnetism. It makes sense to us that black should be the
absence of light and white should be the presence of all wavelengths.
That sense runs through both sides of vision - the optical exterior
and the perceptual interior. It doesn't have to be that way. Black
could look like orange and white could look like red-orange and we'd
still be able to tell the difference. Black vs white though makes a
specific visual sense to us. To anything that can see it.


>See this video:
>http://www.closertotruth.com/video-profile/What-is-the-Mind-Body-Prob...

Ugh. Minsky is wrong. Just because there are more steps involved in
perception doesn't bring the mechanism of neural spikes or ion pumping
any closer to the experience of perception. He's using complexity as a
veil. "Your poor little minds just haven't figured it out yet." It's
not complex, it's just looking at the phenomenon from the wrong end.
He doesn't see that perception doesn't have to correlate to the
mechanics of the brain directly, they both can correlate to different
sides of an underlying noumenon. Watch David Chalmers instead. His
insights make much more sense to me: http://www.youtube.com/watch?v=kmZaA_xoJiM


I will watch the video soon.
 
>We can perceive millions of different colors, but there are not millions of
>types of neurotransmitters, nor millions of types of neurons.  How does your
>theory address this?

I'm not suggesting a one to one correspondence of neurotransmitters to
colors. I'm saying that the sense of the visual spectrum as we know it
is an innate potential of human neurology at the brain level. It may
arise at a lower level - maybe at the level of photosynthesis or the
level of atoms - perhaps at a higher level of astrophysical coherence;
nebula etc. Maybe it's woven into the story of the cosmos itself, in
the fabric of what separates literal fact from metaphorical meaning.

>What, aside from their parts, is different about them?

What's the difference between you reading this and being in a coma?

That's the wrong question to ask, when a person in a coma is functionally very different from a person who is conscious.  This functional difference is absent between a Turing emulation of a brain and a brain.  They would both be similarly capable.
 
What if I could offer the chance for you to have a perfect body, which
will not age or die, which will have powerful extensions of physical
ability, but there is one catch. You will never be able to experience
a single moment that is not filled with blinding, shrieking pain. You
will perceive yourself to be terrifyingly ugly and your world will be
filled only with the most revolting odors and noises. You will find
that you are able to eat and reproduce quite successfully, only your
experience of it will be as gagging and writhing in interminable
nausea. All you would have to comfort you in your unending,
pleasureless misery will be the knowledge that to the outside world
you will appear to be a fantastic human being, successful in all
areas, even that thought however, will repulse you and fill you with
bottomless dread.

I'm assuming that you would agree that such a deal would not be worth
it, but can you explain why? Why privilege one set of patterns over
another? That's what consciousness gives us. Sensorimotive
participation. A way to perceive qualitative differences and feel like
we can choose to move toward or away from them. This is the basis of
life as much as ATP or DNA, but an entirely different topology:
forward and back, high/low, right and wrong, pain and pleasure,
presence and absence. See?

Were those smart sweepers not "sensorimotive" in their attraction toward the food particles?  Did you try running that simulation?

Jason

Craig Weinberg

unread,
Jul 11, 2011, 8:29:49 PM7/11/11
to Everything List

On Jul 11, 7:13 pm, Jason Resch <jasonre...@gmail.com> wrote:

> Craig,
>
> Do you believe there is something physically special about red green and
> blue compared to other wavelengths of light?  Do you think other animals
> that see colors can only see combinations of red, green and blue, regardless
> of the number of types of color-receptive cells in their retina?
>
> Jason

No, electromagnetic wavelengths do not define colors. Wavelengths just
correspond to cellular sensitivities of cells in the retina, but not
necessarily the brain. The visual cortex is not displaying an
illuminated image inside of the brain's tissue.

I don't know what other animals see. What about insects or plants?
Chlorophyll responds to visible light...perhaps color reception is the
subjective purpose of chloroplasts.

meekerdb

unread,
Jul 11, 2011, 8:45:48 PM7/11/11
to everyth...@googlegroups.com
On 7/11/2011 11:35 AM, Craig Weinberg wrote:
> But it could do those things without ever experiencing yellow.

So you say. But it's just an unsupported assertion on your part. If
the ping-pong intelligence could do those things without experiencing
yellow, then maybe you could too. How would I know?

Brent

meekerdb

unread,
Jul 11, 2011, 8:56:24 PM7/11/11
to everyth...@googlegroups.com
On 7/11/2011 5:00 PM, Craig Weinberg wrote:
> There are humans who have four pigments in their color receptors but
> they do not perceive a fourth primary color.
> http://www.klab.caltech.edu/cns186/papers/Jameson01.pdf
>
> They just have increased distinction between the primary colors we
> perceive. I take that to mean that they cannot point to anything in
> nature as having a bright color that ordinary trichromats have never
> seen.
>

How would you know if they did? The only evidence would be if they
could consistently distinguish the colors of two objects that looked
perfectly identical to other people; just as red-green color blind
people can't tell the difference between green and ripe strawberries.
From the color-blind person's perspective that's just increased
distinction between colors he sees.

Brent

> Yeah I don't know the technical descriptions of what constitutes
> primacy in hues, but it's not important to what I'm trying to get at.
> The important thing is that the range and variety of colors we can see
> or imagine is not explainable in purely quantitative or physical
> terms, neither is it metaphysical, random, made up, or arbitrary. It
> constitutes a visual semantic firmament, similar to the periodic
> table. The difference between the color wheel and the periodic table
> is that since experiences and feelings are phenomena that are
> ontologically perpendicular to their external mechanics, they are not
> strictly definable through literal observation and measurement, but
> through first hand encounters which address the subject directly in a
> more uncertain, figurative way. Colors look different depending on
> what colors they are adjacent to, what mood we are in, our gender,
> etc. unlike iron and magnesium which remain the same if placed next to
> each other.
>

You're just asserting that perception is mysterious. Just because we
don't have an explanation for something doesn't mean that an explanation
is in principle impossible. If you give terms like "yellow" an
operational definition then you can test those ideas. As it is, you
*define* them to be "first hand encounters". Then you've already
defined them as impossible to replicate - even by other human beings.

Brent

Craig Weinberg

unread,
Jul 11, 2011, 10:08:19 PM7/11/11
to Everything List

On Jul 11, 8:08 pm, Jason Resch <jasonre...@gmail.com> wrote:
> On Mon, Jul 11, 2011 at 1:29 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

> Not just their quantity, but the relationships of their connections to each
> other.

Ok, but you are still privileging the exterior appearances of neurons
over the interior. You are saying that experience is a function of
neurology rather than neurology being the container for experience.
I'm saying it's both, and causality flows in both directions.

>
> This is functionalism, it is what things do that matters, not what they are
> made of.

Not what things do, but what they are able to do (and detect/sense/
feel/know) based upon what they are.

> I think you would find that
> a lot of the processes going on within a person's head are irrelevant to the
> production of consciousness.

What we get as waking consciousness is an aggressively pared down
extraction of the total awareness of the brain and nervous system, not
to mention the body. There are other forms of awareness being hosted
in our heads besides the ones we are familiar with.

> In an earlier post you mentioned hemoglobin
> playing a role, but if we could substitute a person's blood with some other
> oxygen rich solution which was just as capable of supporting the normal
> metabolism of cells, then why should the brain behave any differently, and
> if it does not behave differently how could the perception of yellow be said
> to be different?

It's a matter of degree. As Bruno says 'substitution level'. Synthetic
blood is still organic chemistry, it's not a cobalt alloy. You're still
hanging on to the idea that what you think the nervous system is doing
is what denotes consciousness. I'm saying that it is the nervous
system itself which is conscious, not the logic of the 'signals' that
seem to be passing through it.

> >> quintillion wires tangled in knots and electrified don't see colors or
> >> feel pain.

> >I think they can

> > Based upon what?
>
> My belief that dualism, and mind-brain identity theory are false, and the
> success of multiple realizability, functionalism, and computationalism in
> resolving various paradoxes in the philosophy of mind.

Can wires time travel, become invisible or omnipotent also, or just
perceive color?

>
> > Can cartoons see or feel pain? Why not?
>
> Cartoons aren't systems that receive and update their state and disposition
> based upon the reception and processing of that information.

Sure they are. Cartoons receive their shape based upon the changing
positions of colored lines and points.

> If visual sensations were so simple, why would
> 30% of your cortex be devoted to its processing? This is a huge number of
> neurons, for handling at most maybe a million or so pixels. How many
> neurons do you think are needed to sense each "pixel" of yellow?

Your computer is 100% devoted to processing digital information, yet
the basic binary unit could not be simpler. Yellow is the same. It
doesn't break or malfunction. Yellow doesn't ever change into a never
before seen color. It's almost as simple as 'square' or a circle. I
agree that the depth of the significance we feel from color and the
subtlety with which we can distinguish hues is enhanced by the
hypertrophied visual cortex. With all of those neurons, why not a
spectrum of a thousand colors, each as different and unique as blue is
to yellow?

I don't think neurons are needed to sense yellow, they are just
necessary for US to see yellow. I think cone cells probably see it,
protozoa, maybe algae sees it.

> So would you say a rock sees the yellow of the sun and the blue of the sky?
> It just isn't able to tell us that it does?

No, I would say that inorganic matter maybe feels heat and
acceleration. Collision. Change in physical state. Just a guess.

> That is the reason for seeing different colors is it not? What defines red
> and green besides the fact that they are perceived differently?

What defines them is their idiosyncratic, consistent visual quality.
Red is also different from sour, does that mean sour is a color? You
don't need color to tell berries from bush. It could be accomplished
directly without any sensory mediation whatsoever, just as your
stomach can tell the difference between food and dirt. (Not that the
stomach cells don't have their own awareness of their world, they
might, just not one that requires us to be conscious of it)

> That would be confusing, I couldn't tell if I were looking at a bush or
> eating. I wouldn't know the relative position of the bush in relation to
> myself or other objects either.

You're trying to justify the existence of vision in hindsight rather
than explaining the possibility of vision in the first place. Again,
omnipotence would be really convenient for me, it doesn't mean that my
body can magically invent it out of whole cloth.

> We have some reason. There are species of monkeys where all the females are
> trichromatic, and all the males are dichromatic. When the first trichromats
> evolved, did their brains and senses not conjure up a new palette which
> never before existed?

I can't know that, but I suspect that there is only one visible schema
experienced by living things on this planet with different levels of
discrimination. That is exactly the case with tetrachromat humans,
they don't see a pure color that is invisible to everyone else, they
just make finer distinctions between our trichromat colors. Possibly
life forms evolved in different solar systems would have a different
palette altogether if the star(s) are significantly different than our
sun.

> > A longer beak, yes. Prehensile tail,
> > sure. You've already got the physical structure, it just gets
> > exaggerated through heredity. Where is the ancestor of red though?
>
> The first being which had both senses capable of distinguishing different
> frequencies of light, and a brain capable of integrating those differences
> into the environmental model of that being. It is likely that this being
> did not perceive red light in the same way we do, it is even possible you
> don't perceive red in the same way I do. For all we know, your brain may be
> the ancestor of red as you know it. Two people can taste the same thing,
> and one person likes it while the other dislikes it, just like two people
> can read the same book and like it or dislike it. It depends on the
> structure of their brains.

Meh, that's just an appeal to uncertainty. It doesn't explain what red
was before it was red nor why the fact that it cannot be conceived
doesn't make it different from something physical like a beak for
which an ancestral form can easily be imagined.

> If it were just stored in memory passively, it makes no difference, but if
> the computer attempted to parse or otherwise process the data then the
> format it is in does become important to the proper processing of that
> information.

If that were true, then unplugging your monitor would change the
content of the internet. Regardless of the form a computer presents
its data to us in, it is processed the same way to itself, machine
language, bytes.

> What do your qualia do? They inform you. Do you have an example of
> anything that is informative but is not information?

There is no physical change in an object to indicate whether or not it
is meaningful to someone. Information is not a thing, it is a
part of speech. Yellow has different meanings to different people in
different contexts, yet it is yellow regardless. It informs but it is
not information. It is concrete visual experience of a living organic
being. Information is what yellow might represent to you or to a
social group or culture.

> You mentioned earlier that light frequency is a linear value. Why then just
> three primary colors?

Don't know. That's more of a cosmological question. The ontology of
awareness is not only mysterious, it is mystery itself.

> I would say qualia are a function of minds, which are a function of
> processing, which is a function of matter. (which Bruno would add is really
> a function of arithmetic)

I could go along with that, except that I would say that mind is the
processing of matter elaborated to an organic-somatic-neurological
degree. It could be arithmetic beneath all that, but I would say
beneath arithmetic is sense.

> If you think the primary colors are fundamental, then to explain colors such
> as pink, you must add the concept of information and quantity to the
> fundamental primary colors. For example, pink = 2 parts blue, 2 parts red,
> 1 part green. So this quantitative information is a necessary component of
> the experience of pink. Once you get to this point, you might as well
> abandon the fundamentalness of the primary colors, they are just markers
> corresponding to activity of different neurons.

I don't think that primary colors are fundamental, just irreducible. I
only bring them up to distinguish in my examples between new colors
that would be profoundly different from anything we have every
conceived and colors which are trivially different such as those that
tetrachromats can see. I'm not tied to the primacy of our colors, they
are just like prime numbers, not divisible by other colors in our
system.
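
(As a side note, the quantitative picture in Jason's pink example fits in a few
lines of Python; the 2:2:1 proportions below are just his illustrative numbers,
not real colorimetry.)

```python
# Jason's "pink" reduced to quantities over three primaries
# (the 2:2:1 proportions are his illustrative numbers, not colorimetry).
parts = {"red": 2, "blue": 2, "green": 1}
total = sum(parts.values())
rgb = tuple(round(255 * parts[c] / total) for c in ("red", "green", "blue"))
print(rgb)   # (102, 51, 102) -- three numbers, which is exactly the point in dispute
```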


> Were those smart sweepers not "sensorimotive" in their attraction toward the
> food particles? Did you try running that simulation?

No I didn't run the simulation but I think I get the idea. Cellular
automata, John Conway's Game of Life, etc. No, the smart sweepers have
no sense of their environment or feel any motive in pursuing their
targets. It's the same thing as painting a face on a volleyball and
using it as puppet. The puppet isn't conscious. The simulation doesn't
know it's a simulation, it just knows about executing microprocessor
instructions over and over.


Craig Weinberg

unread,
Jul 11, 2011, 10:26:21 PM7/11/11
to Everything List
>So you say. But it's just an unsupported assertion on your part. If
>the ping-pong intelligence could do those things without experiencing
>yellow then maybe you could too. How would I know?

Not sure what you mean. Are you saying I should question my own
experience of seeing yellow in order to give a ping pong ball the
benefit of the doubt?

>How would you know if they did? The only evidence would be if they
>could consistently distinguish the colors of two objects that looked
>perfectly identical to other people; just as red-green color blind
>people can't tell the difference between green and ripe strawberries.
>From the color-blind person's perspective that's just increased
>distinction between colors he sees.

They can distinguish the colors of objects that trichromats cannot
(like they can see that someone's shirt doesn't really match their
pants when everyone else thinks it does). Likewise, color blind people
may enjoy the full complement of visual beauty that trichromats see,
just not in as many distinct tones. Maybe they see in classical piano
rather than rock band, but it doesn't mean they are missing out on
music.

>You're just asserting that perception is mysterious. Just because we
>don't have an explanation for something doesn't mean that an explanation
>is in principle impossible. If you give terms like "yellow" an
>operational definition then you can test those ideas. As it is, you
>*define* them to be "first hand encounters". Then you've already
>defined them as impossible to replicate - even by other human beings.

You're just asserting that perception cannot be mysterious. Just
because mysteries have been explained by challenging subjective
assumptions doesn't mean that all subjective phenomena are in
principle explainable in a-signifying, objective terms. If you define
them as something other than first hand encounters, then you
disqualify the only evidence that we can possibly have as human
beings. Besides, I'm not saying that perception must be mysterious,
just that it is a feature of the interior topology of the cosmos
rather than the external side, and therefore has dramatically
different characteristics. Subjects are not objects. Subjects
perceive, objects are perceived. Color is not an object.

Stephen P. King

unread,
Jul 12, 2011, 12:22:18 AM7/12/11
to everyth...@googlegroups.com
Hi Craig!

    Forgive me but could you elaborate on....


On 7/11/2011 10:08 PM, Craig Weinberg wrote:
On Jul 11, 8:08 pm, Jason Resch <jasonre...@gmail.com> wrote:
On Mon, Jul 11, 2011 at 1:29 PM, Craig Weinberg <whatsons...@gmail.com>wrote:

      
Not just their quantity, but the relationships of their connections to each
other.
Ok, but you are still privileging the exterior appearances of neurons
over the interior. You are saying that experience is a function of
neurology rather than neurology being the container for experience.
I'm saying it's both, and causality flows in both directions.
[SPK]
    How does this "causality flows in both directions " work? I have a model of something that has that kind of feature, but I am curious about yours.


      
This is functionalism, it is what things do that matters, not what they are
made of.
Not what things do, but what they are able to do (and detect/sense/
feel/know) based upon what they are.
[SPK]
    How, exactly, are you defining identity as implicit in your question here? To say that X is X, as in the phrase "...what they are ...", is to assume that you know what X is exactly, no? Is this public or private information?


I think you would find that
a lot of the processes going on within a person's head are irrelevant to the
production of consciousness.
What we get as waking consciousness is an aggressively pared down
extraction of the total awareness of the brain and nervous system, not
to mention the body. There are other forms of awareness being hosted
in our heads besides the ones we are familiar with.
[SPK]
    Are you taking into account, for example, decoherence? Are you assuming a classical or quantum world?


 In an earlier post you mentioned hemoglobin
playing a role, but if we could substitute a person's blood with some other
oxygen rich solution which was just as capable of supporting the normal
metabolism of cells, then why should the brain behave any differently, and
if it does not behave differently how could the perception of yellow be said
to be different?
It's a matter of degree. As Bruno says 'substitution level'. Synthetic
blood is still organic chemistry, it's not a cobalt alloy. You're still
hanging on to the idea that what you think the nervous system is doing
is what denotes consciousness. I'm saying that it is the nervous
system itself which is conscious, not the logic of the 'signals' that
seem to be passing through it.

[SPK]
    What difference in kind is there between a component that is equivalent in function *and* is integrable with the system to be substituted? To say that it is made of cobalt alloy would be merely an argument from illicit substitution of identicals!



      
quintillion wires tangled in knots and electrified don't see colors or
feel pain.

      
I think they can

      
Based upon what?
My belief that dualism, and mind-brain identity theory are false, and the
success of multiple realizability, functionalism, and computationalism in
resolving various paradoxes in the philosophy of mind.
Can wires time travel, become invisible or omnipotent also, or just
perceive color?

[SPK]
    How is the specification of wires relevant to the claim? But, Jason, which dualism are you rejecting and why? There is more than one!




Can cartoons see or feel pain? Why not?
Cartoons aren't systems that receive and update their state and disposition
based upon the reception and processing of that information.
Sure they are. Cartoons receive their shape based upon the changing
positions of colored lines and points.

[SPK]
    Umm, are you not implicitly assuming cartoons in the process of generation where the constructors of the cartoons have, as available information, the changing positions of colored lines and points?




If visual sensations were so simple, why would
30% of your cortex be devoted to its processing?  This is a huge number of
neurons, for handling at most maybe a million or so pixels.  How many
neurons do you think are needed to sense each "pixel" of yellow?
Your computer is 100% devoted to processing digital information, yet
the basic binary unit could not be simpler. Yellow is the same. It
doesn't break or malfunction. Yellow doesn't ever change into a never
before seen color. It's almost as simple as 'square' or a circle. I
agree that the depth of the significance we feel from color and the
subtlety with which we can distinguish hues is enhanced by the
hypertrophied visual cortex. With all of those neurons, why not a
spectrum of a thousand colors, each as different and unique as blue is
to yellow?

I don't think neurons are needed to sense yellow, they are just
necessary for US to see yellow. I think cone cells probably see it,
protozoa, maybe algae sees it.
[SPK]
    From whence obtains meaning? Is the yellow an illusion or some phantom to bewitch the mind? How do you know what yellow is like from the first person aspect of an algae? I don't think that they do not, but exactly how could they, in your opinion?




      
So would you say a rock sees the yellow of the sun and the blue of the sky?
 It just isn't able to tell us that it does?
No, I would say that inorganic matter maybe feels heat and
acceleration. Collision. Change in physical state. Just a guess.

[SPK]
    How could you know if you could not ask that question of the rock? So the question becomes whether or not communication is possible between you and a rock. Were the specifics of a language and the attributions of meanings to the objects of experience the result of a computation? If not, what determined them? If they are not determined then how are they different from noise?



That is the reason for seeing different colors is it not?  What defines red
and green besides the fact that they are perceived differently?
What defines them is their idiosyncratic, consistent visual quality.
[SPK]
    "Idiosyncratic" http://www.thefreedictionary.com/idiosyncratic " A structural or behavioral characteristic peculiar to an individual or group." What determined or selected that group from the universe of all similar entities? If that selection process is not equivalent to some computational process then it is indistinguishable from noise, aka, some random process! (Please do not involve some form of a "god in the gap" argument!)


Red is also different from sour, does that mean sour is a color? You
don't need color to tell berries from bush. It could be accomplished
directly without any sensory mediation whatsoever, just as your
stomach can tell the difference between food and dirt. (Not that the
stomach cells don't have their own awareness of their world, they
might, just not one that requires us to be conscious of it)

That would be confusing, I couldn't tell if I were looking at a bush or
eating.  I wouldn't know the relative position of the bush in relation to
myself or other objects either.
You're trying to justify the existence of vision in hindsight rather
than explaining the possibility of vision in the first place. Again,
omnipotence would be really convenient for me, it doesn't mean that my
body can magically invent it out of whole cloth.

[SPK]
    As in "I think therefore I was!" as opposed to the a priori "I think therefore I am"? Omnipotence would not solve the problem of computation here! Not only would you need infinite physical resources, but you would also need infinite time to perform the computation, or else you have to admit a random process caused it to be the only case of colors that you experience!!! The dichotomy is not false!


      
We have some reason.  There are species of monkeys where all the females are
trichromatic, and all the males are dichromatic.  When the first trichromats
evolved, did their brains and senses not conjure up a new palette which
never before existed?
I can't know that, but I suspect that there is only one visible schema
experienced by living things on this planet with different levels of
discrimination. That is exactly the case with tetrachromat humans,
they don't see a pure color that is invisible to everyone else, they
just make finer distinctions between our trichromat colors. Possibly
life forms evolved in different solar systems would have a different
palette altogether if the star(s) are significantly different than our
sun.

[SPK]
    What is your source of that information? To "suspect that "..." is to bet that "..." is true. How different is that from what Bruno is talking about with the "Yes, Doctor"? You seem to be using Bruno's definition of Theaetetian conception of knowledge without even acknowledging it! What is holding you back?




      
A longer beak, yes. Prehensile tail,
sure. You've already got the physical structure, it just gets
exaggerated through heredity. Where is the ancestor of red though?
The first being which had both senses capable of distinguishing different
frequencies of light, and a brain capable of integrating those differences
into the environmental model of that being.  It is likely that this being
did not perceive red light in the same way we do, it is even possible you
don't perceive red in the same way I do.  For all we know, your brain may be
the ancestor of red as you know it.  Two people can taste the same thing,
and one person likes it while the other dislikes it, just like two people
can read the same book and like it or dislike it.  It depends on the
structure of their brains.
Meh, that's just an appeal to uncertainty. It doesn't explain what red
was before it was red nor why the fact that it cannot be conceived
doesn't make it different from something physical like a beak for
which an ancestral form can easily be imagined.

[SPK]
    Seriously, Craig, you are asking for too much! A lack of an explanation that you can understand is not evidence of falsehood! How do you know that you understand the idea? At best you can bet that you are correct; you cannot be certain. Yes, you can have certainty that X is X and that it cannot contradict its own existence, but what can this tell you of the properties of X? Knowledge of the "truth values" of questions about the properties of X implies that you can process the meaning of "X is {a, b, c, ...}" statements. How exactly do you "process meanings"? You use your brain. If that brain is hardwired from DNA to process some range of frequencies as "red" then guess what, you will see red when some EMF excitation stimulates some rod or cone in the retina of your eye...
    All of this physical process involves work that generates entropy. So there is a physical aspect to this.
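
(A rough Python sketch of that "hardwired mapping", frequency in, label out.
The band edges are approximate textbook values, and the labels are of course
just labels, not the experience.)

```python
# Frequency/wavelength in, color label out -- the mapping gestured at above.
# Band edges are approximate; real cone responses overlap heavily.
BANDS = [(380, 450, "violet"), (450, 495, "blue"), (495, 570, "green"),
         (570, 590, "yellow"), (590, 620, "orange"), (620, 750, "red")]

def color_label(wavelength_nm):
    for lo, hi, name in BANDS:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible range"

print(color_label(650))   # 'red' -- the label, of course, not the experience
```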



      
If it were just stored in memory passively, it makes no difference, but if
the computer attempted to parse or otherwise process the data then the
format it is in does become important to the proper processing of that
information.
If that were true, then unplugging your monitor would change the
content of the internet. Regardless of the form a computer presents
its data to us in, it is processed the same way to itself, machine
language, bytes.

[SPK]
    Non-sequitur.


What do your qualia do?  They inform you.  Do you have an example of
anything that is informative but is not information?
There is no physical change in an object to indicate whether or not it
is meaningful to someone. Information is not a thing, it is a
part of speech. Yellow has different meanings to different people in
different contexts, yet it is yellow regardless. It informs but it is
not information. It is concrete visual experience of a living organic
being. Information is what yellow might represent to you or to a
social group or culture.

[SPK]
    Why are you demanding a change in the object to coincide with an act that makes it meaningful that that object, for example, is red? The change is in the mind of the observer, but that observer is not some kind of entity that floats disembodied from the world of stuff in which that object interacts! If you observe a thing, you interact with it, by definition! So do not claim that "there is no physical change in the object"! You are arguing for the a priori assignation of properties to objects in themselves. Why? How is this different in kind from claiming a priori synthetics?



      
You mentioned earlier that light frequency is a linear value.  Why then just
three primary colors?
Don't know. That's more of a cosmological question. The ontology of
awareness is not only mysterious, it is mystery itself.

    [SPK]
    obscurum per obscurius?


I would say qualia are a function of minds, which are a function of
processing, which is a function of matter. (which Bruno would add is really
a function of arithmetic)
I could go along with that, except that I would say that mind is the
processing of matter elaborated to an organic-somatic-neurological
degree. It could be arithmetic beneath all that, but I would say
beneath arithmetic is sense.
[SPK]
    I agree, but we need to show necessitation of the "organic-somatic-neurological". That is just 'level of substitution' specifications!  And what exactly defines "sense" as in "beneath arithmetic is sense"? Whose "sense"? Are you claiming that Consciousness is prior to Existence?



      
If you think the primary colors are fundamental, then to explain colors such
as pink, you must add the concept of information and quantity to the
fundamental primary colors.  For example, pink = 2 parts blue, 2 parts red,
1 part green.  So this quantitative information is a necessary component of
the experience of pink.  Once you get to this point, you might as well
abandon the fundamentalness of the primary colors, they are just markers
corresponding to activity of different neurons.
I don't think that primary colors are fundamental, just irreducible. I
only bring them up to distinguish in my examples between new colors
that would be profoundly different from anything we have ever
conceived and colors which are trivially different such as those that
tetrachromats can see. I'm not tied to the primacy of our colors, they
are just like prime numbers, not divisible by other colors in our
system.
[SPK]
    What is the difference between fundamental and irreducible? The example you are giving about colors, in that they are like prime numbers, is contingent on the metric that the other colors in our spectrum define! To say that X is relatively prime to Y is not a proof that X is not divisible by Z. You are not showing a difference in kind that would specify a categorical difference.



      
Were those smart sweepers not "sensorimotive" in their attraction toward the
food particles?  Did you try running that simulation?
No I didn't run the simulation but I think I get the idea. Cellular
automata, John Conway's Game of Life, etc. No, the smart sweepers have
no sense of their environment or feel any motive in pursuing their
targets. It's the same thing as painting a face on a volleyball and
using it as puppet. The puppet isn't conscious. The simulation doesn't
know it's a simulation, it just knows about executing microprocessor
instructions over and over.

[SPK]
    Why is that? Because the Roomba does not have a subroutine in its program that generates a model of the room and a model of a Roomba in it that is updated and corrected by information that the physical Roomba system would acquire via sonar or whatever other subsystems.
    Check out these robots! http://www.youtube.com/watch?v=ehno85yI-sA
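
(For concreteness, a minimal Python sketch of the loop just described: a model
of the room and of the robot in it, corrected by sensor readings. Every name
and number below is hypothetical illustration, not the Roomba's actual software.)

```python
import random

class SelfModel:
    """An internal model of the room and of the robot's own position in it."""
    def __init__(self, room_width):
        self.room_width = room_width
        self.believed_x = room_width / 2.0   # initial guess about where it is

    def correct(self, measured_distance_to_wall):
        # Blend the current belief with a new sonar-style range reading.
        measured_x = self.room_width - measured_distance_to_wall
        self.believed_x = 0.7 * self.believed_x + 0.3 * measured_x

true_x = 3.0
model = SelfModel(room_width=10.0)
for _ in range(20):
    noisy_range = (10.0 - true_x) + random.gauss(0, 0.1)   # simulated sensor reading
    model.correct(noisy_range)

print(round(model.believed_x, 2))   # converges near the true position (about 3.0)
```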

Onward!

Stephen

Evgenii Rudnyi

unread,
Jul 12, 2011, 3:34:33 AM7/12/11
to everyth...@googlegroups.com
Bruno,

Why don't you make a course for dummies about this? (For example in
Second Life)

Evgenii


On 11.07.2011 16:01 Bruno Marchal said the following:

> Popular attempts to explain Gödel's theorem are often incorrect, and


> the whole matter is very delicate. Philosophers, like Lucas, or
> physicists, like Penrose, illustrate that it is hard to explain

> Gödel's result to non-logicians. I'm afraid the time has not yet come


> for popular explanation of machine's theology.
>

> Let me try a short attempt. By Gödel's theorem we know that for any


> machine, the set of true propositions about the machine is bigger

> than the set of the propositions provable by the machine. Now, Gödel

Craig Weinberg

unread,
Jul 12, 2011, 9:17:38 AM7/12/11
to Everything List
Hi Stephen,

I have to do a Part I now and get into Part II later on.

> How does this "causality flows in both directions " work? I have a
>model of something that has that kind of feature, but I am curious about
>yours.

Subjectively we feel, (and see, hear, remember, understand) that we
can voluntarily cause our mind to focus on different subjects or to
exert our will (motive/motor functionality). We know that this
correlates to electromagnetic activity in the brain and nervous system
which can physically cause muscles to contract or relax themselves.
When we choose to move our arm, it's for a semantic reason known by
our conscious mind rather than a biochemical or physiological purpose
which we just imagine is meaningful. We do actually control our body
and conscious mind to some extent and through that are able to control
our responses to our lives to some extent.

If you're looking for a more mechanical explanation of how subjective
will and objective determinism work I would start with objective
properties being rooted in an ontology of separateness added together
by relativity while subjective properties are subtractive as well -
they use your participation to fill in the blanks between seemingly
separate perceptions (I think of 'black magic', the crayon and
toothpick kind: http://paintcutpaste.com/wp-content/uploads/2010/08/DSC_0182.jpg)

> How, exactly, are you defining identity as implicit in your
>question here? To say that X is X, as in the phrase "...what they are
>...", is to assume that you known what X is exactly, no? Is this public
>or private information?

I try to avoid definitions if I can help it (I think they can detract
from meaning as well as clarify), and I'm not very familiar with
philosophy conventions. I'm just saying that an atom can do things
that my idea of an atom could not, since at some point groups of
groups of atoms get together and form a living cell which eventually,
we know, can host or facilitate human consciousness. As far as X is X,
I don't think that's strictly true. In that sentence the first X is
located five chars to the left of the second X, which is followed by a
comma rather than a space. X is only X because we subjectively make
that semantic equation. In an absolute sense, nothing is anything else
but what it is. There is no truly identical identity.

> Are you taking into account, for example, decoherence? Are you
>assuming a classical or quantum world?

Yes, I'm aware of decoherence. As with probability and superposition
it can be used by QM to explain away just about anything that may
threaten it. I think that QM is likely to be the postmodern version of
Ptolemaic deferent and epicycle as far as it being useful (and precise
to a fantastic degree in the case of QM...because it's the consequence
of extreme occidental focus rather than pre-occidental archaic) but
ultimately getting it completely wrong. I think the whole Standard
Model needs to be completely reimagined as a map of observed atomic
moods rather than physical phenomena.

> What difference in kind is there between a component that is
>equivalent in function *and* is integrable with the system to be
>substituted? To say that it is made of cobalt alloy would be merely an
>argument from illicit substitution of identicals!

Not entirely sure what you're asking. I'm just saying that the
function we assume isn't necessarily the only factor. I don't know if
it's an illicit substitution, I'm just saying cobalt blood isn't
identical (enough) for the body to treat it as blood for all of the
functions that blood performs. If it's not cells for instance, maybe
your bone marrow goes crazy and produces leukocytes, or maybe it
atrophies and you become dependent on the synthetic blood. You can't
assume that just because a fluid delivers oxygen that you can use it
instead of blood indefinitely, and you can't assume that a silicon
sculpture of neural logic can be used to feel anything.

> How is the specification of wires relevant to the claim?

Earlier I had said that a tangle of wires isn't going to feel anything
regardless of how long or tangled it is. Jason responded that he
thinks it can. I'm asking what else can wires do? Everything? Can
anything do anything if put into the right shape? I think organization
doesn't matter at all unless the units you are organizing have
potentials to develop those particular emergent properties you desire.

> Umm, are you not implicitly assuming cartoons in the process of
>generation where the constructors of the cartoons have, as available
>information, the changing positions of colored lines and points?

I don't think so. I'm looking at a finished cartoon as it is being
watched and saying that it is a machine of visual image, different
from computer logic only in its physical substrate.

> From whence obtains meaning? Is the yellow an illusion or some
>phantom to bewitch the mind? How do you know what yellow is like from
>the first person aspect of an algae? I don't think that they do not, but
>exactly how could they, in your opinion?

Yellow is visual feeling. I don't know what algae sees, I'm just
speculating that our cone cells seem to be resolving optical sense at
a single celled level. Since life on this planet originated from
phytoplankton that photosynthesize, I'm connecting the dots that the
retina is practicing a form of photosynthesis and we see what it sees
- more or less.

> How could you know if you could not ask that question of the rock?
>So the question becomes whether or not communication is possible between you
>and a rock.

Right, we can't know. I don't think communication is possible with a
rock, although it is possible that some people with unusual
sensitivity could pick up images or experiences from reading objects
or places. That's at the far subjective end of the spectrum though,
pushing perception out so far oriental that it comes out the other
side. Too flaky to rely upon.

>Were the specifics of a language and the attributions of
>meanings to the objects of experience the result of a computation? If
>not, what determined them? If they are not determined then how are they
>different from noise?

Where they are is in/from/through the singularity and its projection
of self division through time-space induction. It's not a computation,
more like a gigantic lookup table, but accessed blind. Pure
speculation of course.

> "Idiosyncratic" http://www.thefreedictionary.com/idiosyncratic " A
>structural or behavioral characteristic peculiar to an individual or
>group." What determined or selected that group from the universe of all
>similar entities? If that selection process is not equivalent to some
>computational process then it is indistinguishable from noise, aka, some
>random process! (Please do not involve some form of a "god in the gap"
>argument!)

What determines the color of a particular stitch in a tapestry? Sense.
Appropriateness. The universe makes sense on a lot of different
levels. How things seem to us is neither random, nor noise, nor
computational process, it's multivalent, participatory, semantic
coherence. Detection, sensation, perception, feeling, instinct,
cognition, intuition, genius, creativity, calculation, computation,
communication... all points on the continuum of sense. It doesn't only
compute, it collapses and simplifies - iconicizes, essentializes,
extracts, guesses, jumps to conclusions, recapitulates, expresses,
etc.


>> Red is also different from sour, does that mean sour is a color? You
>> don't need color to tell berries from bush. It could be accomplished
>> directly without any sensory mediation whatsoever, just as your
>> stomach can tell the difference between food and dirt. (Not that the
>> stomach cells don't have their own awareness of their world, they
>> might, just not one that requires us to be conscious of it)

>>> That would be confusing, I couldn't tell if I were looking at a bush or
>>> eating. I wouldn't know the relative position of the bush in relation to
>>> myself or other objects either.
>> You're trying to justify the existence of vision in hindsight rather
>> than explaining the possibility of vision in the first place. Again,
>> omnipotence would be really convenient for me, it doesn't mean that my
>> body can magically invent it out of whole cloth.

>[SPK]
> As in "I think therefore I was!" as opposed to the a priori "I
>think therefore I am"? Omnipotence would not solve the problem of
>computation here! Not only would you need infinite physical resources,
>but you would also need infinite time to perform the computation, or
>else you have to admit a random process caused it to be the only case of
>colors that you experience!!! The dichotomy is not false!

Not sure what the cogito has to do with the presumption of the
necessity of color. Omnipotence solves all problems by definition,
doesn't it? I'm just using it as an example to show that it's
ridiculous to think that the idea of color can just happen in a
physical environment that doesn't already support it a priori. It does
not evolve as a consequence of natural selection, not only because it
serves no special function that unconscious detection would not
accomplish, but because there is no precursor for it to evolve from,
no mechanism for cells or organs to generate perception of color were
it not already a built in possibility. I'm saying that color
perception is more unlikely to exist in a purely physical cosmos than
time travel or omnipotence as a possible physical adaptation. I'm
trying to get at Jason's radical underestimation of the gap between
zoological necessity and the possibility of color's existence.


Bruno Marchal

unread,
Jul 12, 2011, 1:58:53 PM7/12/11
to everyth...@googlegroups.com

On 09 Jul 2011, at 04:17, Craig Weinberg wrote:

>> Your assumptions are not clear enough so I never know if you talk of
>> what is or of what seems to be.
> I'm trying for 'what seems to be what is',

OK. But what is your assumption?

> since what is isn't
> knowable

In which theory? I think that a part of 'what is' is knowable (for
example consciousness). And I think elementary arithmetical conviction
is communicable. I am pretty sure I can prove to you that 17 is a
prime number, or even (less obviously) that the equation x^2 = 2y^2
has no solution in nonzero integers.
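
(For completeness, the standard sketch of that second claim, in LaTeX; nothing
here goes beyond the claim itself.)

```latex
% Sketch: $x^2 = 2y^2$ has no solution in nonzero integers.
Suppose $x^2 = 2y^2$ with $x, y$ nonzero integers and, after dividing out any
common factor, $\gcd(x, y) = 1$. Then $x^2$ is even, so $x$ is even; write
$x = 2k$. Substituting gives $4k^2 = 2y^2$, i.e.\ $y^2 = 2k^2$, so $y$ is even
too, contradicting $\gcd(x, y) = 1$. Hence no such $x, y$ exist (equivalently,
$\sqrt{2}$ is irrational).
```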


> and what seems to be doesn't matter if it doesn't reflect
> what is.

OK. But the question is: what are you assuming? I get the feeling that
you assume a primitively physical universe.
I am OK with that theory, which might indeed be true, except that even
without QM, the question of the interpretation of the physical laws is
not entirely trivial for me.
But then, as you do (so you are coherent with comp), you need a non-
computationalist theory of mind.
My point is a proof that you are coherent. Sane04 sums up an argument
showing that mechanism (comp) and materialism (physicalism) are
logically (with some nuances) incompatible.

Now, in the branching dilemma materialism XOR mechanism, you keep
materialism, apparently.

I keep doubting, but keeping mechanism for the sake of the reasoning,
transforms the mind-body problem into a body problem in theoretical
computer science (which is a branch of number theory).
The mind theory is then very natural: it is the study of what a machine
can prove, know, observe, feel, and hope about herself.
The matter theory is counterintuitive. But not much weirder than most
interpretations of QM.

The theory of everything becomes number theory.
And then a miracle occurs! By the incompleteness theorem of Gödel,
which is among what machine can prove, numbers can distinguish (or
numbers get deluded, I don't know) provability from knowledge,
observation, sensations, etc.


>
>> I limit the mystery to the numbers through the notion of machines
>> and self-reference.
> If you limit the mystery, then won't what you get back be defined by
> how you have defined those limits?

Sorry. I was unclear. Consciousness and Matter are the mysteries I
work on. What I pretend, is two things:

1) If you (at least) agree that your daughter marry a guy who got,
to survive some disease, an artificial heart, an artificial kidney,
and an artificial brain. The heart is "just" a pump, and the brain is
"just" a computer. The idea here is that the brain is a natural carbon-
based computer. Computers, as it happens, can all emulate each other.
Well, if you agree to think about that hypothesis, you can see that we
have literally no choice: we have to extract the physical patterns and
the reason for their stability in the way "machine's dreams" can become
first person sharable, and relate them to more particular universal numbers.

2) Some Löbian machines already exist, like PA and ZF, and are very
well studied, and thanks to the work of Gödel and others, we can
axiomatize completely the theology of the universal machine.
The proper theology is just computer science minus computer's computer
science. In this epoch you can also paraphrase it as Tarski minus
Gödel (truth about the computer minus what the computer can prove).
But computers can do many more things than proving; they can know,
observe, etc. Even in the "naïve" theory of the ideally correct machine,
with believable = provable, knowable = provable and true, observable =
provable and consistent, feelable (sorry for that word) = provable and
consistent and true.
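
(In standard modal notation, writing $\Box p$ for "provable $p$" and
$\Diamond p$ for $\neg\Box\neg p$, i.e. "$p$ is consistent", the list just
given can be written out as follows; this is only a transcription of the
sentence above, nothing more.)

```latex
% The intensional variants from the paragraph above, in modal notation
% ($\Box p$ = provable $p$, $\Diamond p$ = $\neg\Box\neg p$ = consistent $p$):
\begin{align*}
\text{believable } p   &\;:\; \Box p\\
\text{knowable } p     &\;:\; \Box p \land p\\
\text{observable } p   &\;:\; \Box p \land \Diamond p\\
\text{``feelable'' } p &\;:\; \Box p \land \Diamond p \land p
\end{align*}
```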


>
>> Consciousness content, like fear, can modify the matter distribution
>> around. At a deeper level, we select the realities which support us
>> since a long time (deep computation).
> I think that's true or half true, but not even the most evolved lama
> or enlightened yogi can fail to react to multiple bullets fired
> through their head or a massive dose of cyanide.


Of course. Although we don't know, for sure, their first person
experiences.

>
>> The problem is to relate them to third person sharable notions.
> They can't be related except through direct neurological intervention.

?

Are you using a brain-mind identity thesis? I guess so. It is OK,
because, well you believe that your daughter married a (philosophical)
zombie.


> There is never going to be a quantitative expression to bring the
> color blue to a mind which is part of a brain that has never seen
> blue.

OK. (Except serendipitously)

> You can, however, potentially intervene upon the brain
> electronically, perhaps simulate a conjoined twin connection, and
> create a memory of blue. Blue cannot be described quantitatively
> however.

You are right on this. But "Blue cannot be described quantitatively"
is a qualitative assertion, and machines can make qualitative
assertions too. They too can understand that their qualia are not
communicable.

> An electromagnetic wavelength is not a visual experience,

Nor is the visual cortex functioning.

> it's just a measurement of linear quantity.
>
>> We can still doubt all the content of consciousness
> Then why not doubt the doubt of all the content of consciousness?

Because consciousness is a fixed point of it. You can't doubt it
because to genuinely doubt, you have to be conscious.
That does not prove (even to you) that you are conscious, that proves
only that from your own private first person experience, you cannot
genuinely doubt it.

>
>> If a part remains not explainable, then it would be nice to have a
>> meta-explanation for
>> that. (and this happens with the logic of self-reference)
> Not sure I'm following. The meta explanation is that physics and
> perception are two sides of a coin which function in two very
> different ways but they overlap in certain ways.

And what is the coin?

With comp, physics is the border of the universal mind (to be short).
But the mind identity is no more one-one, but one-many. My point is
that this is testable, and the quantum weirdness seems to qualify as
a good candidate for the computationalist weirdness.

>
>>> If you trust both perception and physics
>>
>> But that is exactly what we should not trust too much, and especially
>> not take literally.
> I think it's okay to trust them as long as you understand that the
> trust you place in either direction has consequences. I want bridge
> builders to take physics very seriously and I want artists to take
> their perception very seriously. For myself, I want to be able to
> focus on whatever frequencies along the continuum are most appropriate
> for the context (sanity).

I am afraid you should take math much more seriously, and theology,
also.
I mean the theology behind the superstition, the fear sellers, etc.

>
>> I think you are bringing some identity thesis, which might force you
>> to bring infinities in the picture to make it coherent. But You are
>> not precise enough to make it appears.
> Does this help? http://www.stationlink.com/art/dualism5.jpg

Rational dualism is consistent with computationalism. Indeed a
rational octalism, I could say, appears. The eight simpler intensional
variants of the logic of self-reference.

But I don't see the theory. My feeling is that you mystify both matter
and mind, and their relation. You might have the correct intuition,
but I don't see a clue on what is matter and what is mind, just an
association which violates comp, and thus introduces infinities, that
you will have to handle.

Bruno


>
>
>
>
> On Jul 8, 5:23 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>> On 08 Jul 2011, at 14:46, Craig Weinberg wrote:
>>
>>>> That's what I thought he said. But I see no reason to suppose a UD
>>>> is
>>>> running, much less running without physics. We don't know of any
>>>> computation that occurs immaterially.
>
>>
>>> All computation occurs materially and immaterially. An abacus
>>> doesn't
>>> count itself. You ultimately have to have a conscious interpreter to
>>> signify any particular text as quantitatively meaningful.
>>
>> The idea here is that a universal interpreter (and I think an abacus does
>> that job) is enough. And then to reason.
>> Your assumptions are not clear enough so I never know if you talk of
>> what is or of what seems to be.
>>
>>> Unplug all
>>> monitors from all computers and what do you have left? Expensive
>>> paperweights.
>>
>>> Why not just see perception as both local-solipsistic and generic-
>>> universal?
>>
>> I think Rex has defended such a view. It does not satisfy me. You start
>> from the mystery. I limit the mystery to the numbers through the
>> notion of machines and self-reference.
>>
>>> Isn't that exactly what it seems to be -
>>
>> Well, but that is not an argument for a platonist. If it seems like
>> this, it is certainly not this. You do describe, perhaps correctly, a
>> first person experience. The problem is to relate them to third
>> person
>> sharable notions.
>>
>>> a phenomena which
>>> both seamlessly integrates psychological experience and physical
>>> existence together in some contexts and clearly distinguishes
>>> between
>>> them in others? If that's the case, then why not see that
>>> principle of
>>> a meta-dualism which is a continuum between a dualism and two
>>> monisms
>>> (each representing each other as the opposite of themselves) as the
>>> principle governing all phenomena, all the way up and down the
>>> macrocosm-mesocosm-microcosm.?
>>
>>> If you can't trust perception, then why do you suppose that you can
>>> trust your perception that you can't trust perception?
>>
>> That is a nice argument, but it shows that we cannot doubt
>> consciousness. We can still doubt all the content of consciousness,
>> except this one.
>> This does not force us to start from that concept, except by
>> accepting
>> its existence, and that it has to be explained. If a part remains not
>> explainable, then it would be nice to have a meta-explanation for
>> that. (and this happens with the logic of self-reference)
>>
>>
>>
>>> If you can't trust physics then how do you explain the fact that
>>> physical entities (bullets, psychoactive molecules) affect
>>> consciousness but not the other way around?
>>
>> Consciousness content, like fear, can modify the matter distribution
>> around. At a deeper level, we select the realities which support us
>> since a long time (deep computation).
>>
>>
>>
>>> If you trust both perception and physics
>>
>> But that is exactly what we should not trust too much, and especially
>> not take literally.
>>
>>> then all you have to do is
>>> identify the relationship between them as the most likely aspect
>>> to be
>>> distorted by both perception and physics, and the most defining of
>>> our
>>> subjective condition as a particular subjective phenomenon.
>>
>> I think you are bringing some identity thesis, which might force you
>> to bring infinities in the picture to make it coherent. But You are
>> not precise enough to make it appears.
>>
>> Bruno
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>> Yes, perception can be tricked and exposed as a limited neurological
>>> phenomenon, however under most circumstances, our perception somehow
>>> seems to do quite an admirable job of passing on to us precise
>>> meanings and high quality information from both straightforward
>>> physical sources and more mysterious and creative psychological
>>> sources. The integrity of that information, as it passes through
>>> countless neurological transductions - from optical-sonic
>>> correlations
>>> to gestalt memory associations, is what perception is; not just the
>>> final neurological rattlings, it's the whole thing. Sense is
>>> universal. Not human sense of course. Not physical sense, and not
>>> psychological sense, but the sense period, common and uncommon, is
>>> the
>>> thread that binds it all together. Whether it's the string of String
>>> theory, or a strand of DNA, or a string of alphanumeric
>>> characters, a
>>> conversation thread, etc. it's all about pattern and sense.


>>
>>
>> http://iridia.ulb.ac.be/~marchal/
>


http://iridia.ulb.ac.be/~marchal/

Craig Weinberg

unread,
Jul 12, 2011, 1:59:48 PM7/12/11
to Everything List
Part II

> What is your source of that information?
About human tetrachromats?
http://www.klab.caltech.edu/cns186/papers/Jameson01.pdf

Everything else is just my hypothesis.

>To "suspect that "..." is
>to bet that "..." is true. How different is that from what Bruno is
>talking about with the "Yes, Doctor"? You seem to be using Bruno's
>definition of /Theaetetian/ conception of knowledge without even
>acknowledging it! What is holding you back?

I don't get the connection. From Bruno's Yes, Doctor I get the idea of
substitution level, although most of what I'm talking about isn't to
do with prosthetic computation, it's about a topological hypothesis of
ontology. I haven't been able to make sense of Bruno's Theaetetian
conception yet so I can't say if I'm telepathically plagiarizing him.

> Seriously, Craig, you are asking for too much! A lack of an
>explanation that you can understand is not evidence of falsehood! How do
>you know that you understand the idea?

I think I understand Jason's idea if that's what you're referring to,
I just reject it on the grounds that it is contingent upon the
existence of something which I consider to be a logical impossibility.
There can be no ancestor of red. It either has red or it doesn't. It
can't be something that is almost color but still a little bit goat
horn. To quote you in the future... non-sequitur.

>At best you can bet that you are
>correct; you can not be certain. Yes, you can have certainty that X is X
>and that it cannot contradict its own existence, but what can this tell
>you of the properties of X?

It can tell you that you know more about X or red than you think you
do. If that's what you're asking.

> Knowledge of the "truth values" of questions
>about the properties of X implies that you can process the meaning of X
>is {a, b, c, ...} statements. How exactly do you "process meanings"?

Not sure what this means really. Meanings are not processed, they are
revealed. Understood (the etymology of understand gives a better sense
of this *nter-standing as in, entero, something that supports you in
the gut, that settles you as it settles within you). The gap between
the sense of what you are and what the meaning is closes so that the
sensorimotor circuit is completed - irrespective of physical presence.
You can understand things which are not physically present, but some
semblance of their meaning is semantically present.

>You use your brain.

More accurate to say that I am my brain? I don't use a brain to think,
I am a brain that thinks.

> If that brain is hardwired from DNA to process some
>range of frequencies as "red" then guess what, u will see red when some
>EMF excitation stimulated some rod or cone in the retina of your eye...

Where does the DNA get red from?

> All of this physical process involves work that generates entropy.
>So there is a physical aspect to this.

I would say that since sensorimotive phenomena are the interior side of
electromagnetism, and its ontological opposite, qualia generate
negentropy which balances the existential-relativity-entropy
side.

>> If that were true, then unplugging your monitor would change the
>> content of the internet. Regardless of the form a computer presents
>> its data to us in, it is processed the same way to itself, machine
>> language, bytes.

>[SPK]
> Non-sequitur.

I'm just saying that formatting is important to us, not to the
computer. It's a false equivalence to presume that just because you
see information formatted through a human-friendly presentation layer,
that layer must have its own awareness. It's a drawing. A
cartoon.

> Don't know. That's more of a cosmological question. The ontology of
> awareness is not only mysterious, it is mystery itself.

[SPK]
obscurum per obscurius?

Yes and no. Mystery arises from the privatization of sense through the
subjective topology. Sensorimotive experience gives rise to mystery
just as wealth gives rise to poverty. Knowing means knowing that you
don't know, which is another way of saying that the self feels what it
is by feeling what it is not (how else could there be a self?)

> I agree, but we need to show necessitation of the
>"organic-somatic-neurological".

The interior topology is not about necessity, it's about freedom,
imagination, joy, violence. Color exists because it is desirable. On
the subjective side of the curtain, the universe, she just wanna have
fun.

>That is just 'level of substitution' specifications!

Not getting the connection.

> And what exactly defines "sense" as in "beneath
>arithmetic is sense"? Whose "sense"? Are you claiming that Consciousness
>is prior to Existence?

I doubt that whatever sense gives rise to arithmetic sense would be
recognizable to us as Consciousness, but since it's beyond time and
space, it could be described as both absolutely omniscient, absolutely
unconscious, and maybe even relatively semi-conscious too. Sort of
like Yahweh-Cthulhu-frisbee-akashic records-interior of the big bang.

> What is the difference between fundamental and irreducible? The
>example you are giving about colors, in that they are like prime
>numbers, is contingent on the metric that the other colors in our
>spectrum define! To say that X is relatively prime to Y is not a proof
>that X is not divisible by Z. You are not showing a difference in kind
>that would specify a categorical difference.

In that context I'm using fundamental as a category for things that
are indispensable, as opposed to things which are notable only in
their purity. The mechanical properties of color are not really
relevant. You can mix two white chemicals and get something yellow.
I'm not trying to posit a universal alchemical nature of color
primacy. It only interests me at all as far as it lets me point out
how blue blue is and how unlike anything it is from everything that
has no blue in it. Purple, however, is like blue but blue has no
purple in it, so it's a secondary color. But this whole sub-thread is
a distraction; my only point is that it's significant that certain
colors exist and no more can be conceived of, and how different that is
from a physical-quantifiable-computable phenomenon. Not trivially
different, but insurmountably, ontologically different.

> Why is that? Because the Roomba does not have a subroutine in its
>program that generates a model of the room and a model of a Roomba in it
>that is updated and corrected by information that the physical Roomba
>system would acquire via sonar or whatever subsystems.
> Check out these robots! http://www.youtube.com/watch?v=ehno85yI-sA

Awesome robot! It still has no sensorimotive participation with its
environment. The behavior we see is electromagnetic only. Inside the
starfish components there is still only automatic execution of
meaningless, though impressively adaptive, instruction sets.

On Jul 12, 12:22 am, "Stephen P. King" <stephe...@charter.net> wrote:
> definition of /Theaetetian/ conception of knowledge without even
> acknowledging it! What is holding you back?
>
>
>
> >>> A longer beak, yes. Prehensile tail,
> >>> sure. You've already got the physical structure, it just gets
> >>> exaggerated through heredity. Where is the ancestor of red though?
> >> The first being which had both senses capable of distinguishing different
> >> frequencies of light, and a brain capable of integrating those differences
> >> into the environmental model of that being.  It is likely that this being
> >> did not perceive red light in the same way we do, it is even possible you
> >> don't perceive red in the same way I do.  For all we know, your brain may be
> >> the ancestor of red as you know it.  Two people can taste the same thing,
> >> and one person likes it while the other dislikes it, just like two people
> >> can read the same book and like it or dislike it.  It depends on the
> >> structure of their brains.
> > Meh, that's just an appeal to uncertainty. It doesn't explain what red
> > was before it was red nor why the fact that it cannot be conceived
> > doesn't make it different from something physical like a beak for
> > which an ancestral form can easily be imagined.
>
> [SPK]
>      Seriously, Craig, you are asking for too much! A lack of an
> ...
>

Jesse Mazer

unread,
Jul 12, 2011, 2:16:44 PM7/12/11
to everyth...@googlegroups.com
Craig, I wonder what you'd think of Chalmers' "Absent Qualia, Fading Qualia, Dancing Qualia" argument at http://consc.net/papers/qualia.html which to me makes a strong argument for "organizational invariance", which says physical systems organized the same way should produce the same qualia, so for example a computer which simulated each of my neurons and their interactions with sufficient accuracy would give rise to the same qualia as my biological brain.

The basic idea of the argument is that if you gradually replaced my brain's neurons with computer chips that simulated the behavior of the removed neurons and had the same input/output relationships, my qualia should not change or fade in any reasonable theory of consciousness (an unreasonable one would be one that had a total disconnect between qualia and behavior, so that for example my qualia could be gradually fading or changing, or even changing on a second-by-second basis, and yet behaviorally I would argue emphatically that they were remaining unchanged).

Jesse

Bruno Marchal

unread,
Jul 12, 2011, 3:58:02 PM7/12/11
to everyth...@googlegroups.com

On 11 Jul 2011, at 23:57, Craig Weinberg wrote:

> I'm having trouble understanding what you're saying.
>
>>> Computer chips don't behave in the same way though.
>
>> That is just a question of choice of level of description. Unless you
>> believe in substantial infinite souls.
>
> Not sure what you mean in either sentence. A plastic flower behaves
> differently than a biological plant.

Sure. But they do not have the same function.

> A computer chip behaves
> differently than a neuron.

Not necessarily. It might, if programmed well enough, do the same
thing, and then it is a question of interfacing a different sort of
hardware to replace the neuron with the chip.


> Why assume that a computer chip can feel
> what a living cell can feel?

Because all known laws of nature, even their approximations, which can
still function at some high level, are Turing emulable. In the case of
biology, there is strong evidence that nature has already bet on
functional substitution, because it happens all the time at the
biomolecular level.
Even the quantum level is Turing emulable, though no longer in real time,
and you would need quantum chips. But few believe the brain can be a
quantum computer, and it would change nothing in our argument.


>
>>> Your computer
>>> can't become an ammoniaholic or commit suicide.
>
>> Why?
>
> I'm talking about your actual computer that you are reading this on.
> Are you asking me why it can't commit suicide or spontaneously develop
> a hankering for ammonia?

Because it is a baby, and its universality is exploited by the
sellers, or the nerds.
And we don't allow it any form of introspection, except some disk
verification. So it has no reason, and no real means, to think about
suicide. It has still no life, except that (weird) form of blank
consciousness I am beginning to suspect. My computer is not a good example
when talking about computers in general. By computers I mean universal
machines, and this is a mathematical notion.

A physical computer seems to be a mathematical computer implemented in,
well, another probable universal being in some neighborhood. With
comp, they are numerous. With QM, too.

>
>> The other side is well explained in the comp theory.
>
> I'm giving it a good try reading your SANE2004 pdf but I think I'm
> hovering at around 4% comprehension.

That's a bad grade! What is the first 5% that you don't understand?

> If you want me to be able to
> consider your hypothesis I think that you will have to radically
> simplify it's insights to concrete examples which are not dependent
> upon references to anyone else's work, logical/mathematical/or
> philosophical notation, teleportation, or Turing anything.

Read just the UDA. The first seven steps give the picture. Of course,
you have to be able to reason with a hypothesis, keeping it in mind all
along the reasoning.

>
> As near as I can tell, it seems like you are looking at the hows and
> whys of sensation - how physics and sensation are both logical
> relations

No, they are related to arithmetical relations and sets of arithmetical
relations.

> rather than noumenal existential artifacts and why it might
> be necessary. I can't really tell what your answer is though.

God creates the natural numbers; all the rest is created by the natural
numbers, created or subselected by their ancestors in long
computational histories.
Comp leads to a many-worlds interpretation of arithmetic.

> My focus
> is on describing what and who we are in the simplest way. To my mind,
> what and who we are cannot be described in purely arithmetic
> relations, unless arithmetic relations automatically obscure their
> origin and present themselves in all possible universes as color,
> sound, taste, feeling, etc.

Nice picture. This is what happens indeed.

>
>> No problem. That would mean that the substitution level is low. It
>> does not change the conclusion: the physical world is a projection of
>> the mind, and the mind is an inside view of arithmetic (or comp is
>> false, that is, at all level and you need substantial souls). But we
>> don't even find a substance for explaining matter, so that seems a
>> regression to me. Anyway, it is inconsistent with the comp
>> assumption.
>
> When you say that the physical world is a projection of the mind, do
> you mean that in the sense that it might be possible to stop bullets
> directly with our thoughts or in the sense of physicality only seeming
> physical because our mind is programmed to read it as such?


It is in between. Because physics is not the projection of the human
mind, but the projection of all universal (machine (number)) minds. So
we can't change the laws of physics by the power of the mind, but we
can develop degrees of independence. That is why we can fly, and go to
the moon.


> I would
> agree that physicality arises only from the body's own physical
> composition and our mind's apprehension of the body's awareness of
> itself in relation to it's world, but I wouldn't say that physical
> matter is a mental phenomenon. By definition, mental phenomena are
> exempt from physical constraints, such as gravity, thermodynamics,
> etc.

They are not; even in Platonia. You have to grasp at least up to
step seven to see what I mean. I am not trying to propose a solution.
I just show that materialism and mechanism are not compatible, and
then that mechanism proposes a path toward the solution, which consists
in a sort of dialog with a universal (Löbian) machine.


>
> I don't know about the mind being an inside view of arithmetic. I
> would say that arithmetic is only one category of sense and see no
> reason to privilege it above aesthetic sense or anthropomorphic sense.

It is simple and Turing universal. I could choose any first-order
logical specification of a universal system instead of arithmetic, but
arithmetic is much better known.

> Sense is the elemental level to me. Pattern and pattern detection.
> Counting is just another pattern. Not all patterns can be reduced to
> something that can be counted.


The notion of universal machine provides just that. It is not trivial.
This is what mathematicians discovered in the 1920s and 1930s. I can
explain to you why this is possible, although there is a BIG price,
which is that a universal machine can crash, and no one can really
predict this in general.
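
As a rough illustration of that unpredictability, here is a minimal Python sketch of the classic diagonal argument; the function names are hypothetical and purely illustrative, not anything taken from the comp theory itself:

# Suppose we had an oracle halts(prog, arg) that always correctly returns
# True if prog(arg) eventually stops and False otherwise.  (No such total,
# always-correct function can exist; this sketch shows why.)
def halts(prog, arg):
    raise NotImplementedError("hypothetical oracle, cannot actually be written")

def contrary(prog):
    # Do the opposite of whatever the oracle predicts about prog run on itself.
    if halts(prog, prog):
        while True:          # predicted to halt, so loop forever
            pass
    return "halted"          # predicted to loop forever, so halt immediately

# Asking the oracle about contrary(contrary) is self-defeating: whichever
# answer it gives, contrary(contrary) does the opposite, so no general
# prediction of halting (or crashing) is possible.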


> Some things have to be named. Still
> others cannot be named or numbered.

Yes. Theoretical computer science is full of results with that shape.


>
>> But computer science explains why and how such feelings occur.
>
> Computer science explains why pain exists?

In the case of pain, the why is easy. It provides motivation in the
game of life (to eat or to be eaten).

The complex problem is how pain is possible, and yes, I think that
computer science has interesting things to say here.


>
>> If you get the six or seven first steps, it is an easy exercise to
>> show that matter cannot be cloned. Ask if you have any difficulty.
>
> Unfortunately I can't really get any of the steps.

I think it is a problem of motivation, or prejudice (like nothing can
make me doubt the primary character of physics, or something).

Try again, or ask questions, at any step. Or never mind. Although you
don't seem to have a theory, the UDA shows that you are correct in
rejecting comp, for saving primitive matter. Knowing that might help
you to begin your theory of mind-matter. It already says that you will
need some infinities.

Bruno

>
>
> On Jul 11, 4:26 am, Bruno Marchal <marc...@ulb.ac.be> wrote:

>> On 11 Jul 2011, at 04:17, Craig Weinberg wrote:
>>
>>> What is it that
>>> explains non-cloning of matter? comp? Give me some details and I'll
>>> try to understand.
>>
>> Read http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract
>> ...
>>

>> If you get the six or seven first steps, it is an easy exercise to
>> show that matter cannot be cloned. Ask if you have any difficulty.

http://iridia.ulb.ac.be/~marchal/

Jason Resch

unread,
Jul 12, 2011, 5:30:02 PM7/12/11
to everyth...@googlegroups.com
On Tue, Jul 12, 2011 at 8:17 AM, Craig Weinberg <whats...@gmail.com> wrote:

Not sure what the cogito has to do with the presumption of the
necessity of color. Omnipotence solves all problems by definition,
doesn't it? I'm just using it as an example to show that it's
ridiculous to think that the idea of color can just happen in a
physical environment that doesn't already support it a priori. It does
not evolve as a consequence of natural selection, not only because it
serves no special function that unconscious detection would not
accomplish, but because there is no precursor for it to evolve from,
no mechanism for cells or organs to generate perception of color were
it not already a built in possibility. I'm saying that color
perception is more unlikely to exist in a purely physical cosmos than
time travel or omnipotence as a possible physical adaptation. I'm
trying to get at Jason's radical underestimation of the gap between
zoological necessity and the possibility of color's existence.


I think the problem with Chalmers' view is that by assuming a universe without qualia (or philosophical zombies) is possible, it inevitably leads to substance dualism or epiphenomenalism.  If zombies are possible, it means that consciousness is something extra which can be taken away without affecting anything.  Thus, consciousness would have no effects, which I think is against your view.  Are you familiar with this: http://www.philforum.org/documents/An%20Unfortunate%20Dualist%20(Raymond%20Smullyan).pdf ?
If not, it can give you a feel for why zombies may be logically impossible.  So what is your thought on this subject?  Can a universe exist just like ours but have different qualia or none at all?

My view is that qualia are necessary and identical anywhere an identical processing of information, at some substitution level, is performed.  Thus, if it is done by a computer or a human, or a human in this universe or another universe, or a computer in this universe or a person in a different universe, the resulting qualia will be the same, because I believe qualia are a property of the mind, not a property of the physics on which the mind is built.

Jason

Craig Weinberg

unread,
Jul 12, 2011, 6:50:12 PM7/12/11
to Everything List
Thanks, I always seem to like Chalmers' perspectives. In this case I
think that the hypothesis of physics I'm working from changes how I
see this argument compared to how I would have a couple years ago. My
thought now is that although organizational invariance is valid,
molecular structure is part of the organization. I think that
consciousness is not so much a phenomenon that is produced, but an
essential property that is accessed in different ways through
different organizations.

I'll just throw out some thoughts:

If you take an MRI of a silicon brain, it's going to look nothing like
a human brain. If an MRI can tell the difference, why can't the brain
itself?

Can you make synthetic water? Why not?

If consciousness is purely organizational, shouldn't we see an example
of non-living consciousness in nature? (Maybe we do but why don't we
recognize it as such). At least we should see an example of an
inorganic organism.

My view of awareness is now subtractive and holographic (think pinhole
camera), so that I would read fading qualia in a different way. More
like dementia.. attenuating connectivity between different aspects of
the self, not changing qualia necessarily. The brain might respond to
the implanted chips, even ruling out organic rejection, the native
neurology may strengthen its remaining connections and attempt to
compensate for the implants with neuroplasticity, routing around the
'damage'. Qualia could also become more intense as the native brain
region gets smaller. Loudness seems to correlate with stupidity rather
than quiet behavior - maybe there's a reason for that. Maybe people
with less integrated neurons live in a coarser, more percussively
energetic version of the universe?

Of course, it's possible that silicon will not present as much of an
organizational incompatibility as I'm guessing, but my hunch is that
even if you could pull it off with chips, you would end up having to
reinvent living cells in semiconductor form before you can get feeling
out of them. I think there is a lot of organic firmware in there that
is not going to be supported on a solid state platform. Life needs
water. Our feelings need cells that need water. I see no reason to
think that water is less of a part of human consciousness than is
logic.



meekerdb

unread,
Jul 12, 2011, 7:10:54 PM7/12/11
to everyth...@googlegroups.com
On 7/12/2011 2:30 PM, Jason Resch wrote:


On Tue, Jul 12, 2011 at 8:17 AM, Craig Weinberg <whats...@gmail.com> wrote:

Not sure what the cogito has to do with the presumption of the
necessity of color. Omnipotence solves all problems by definition,
doesn't it? I'm just using it as an example to show that it's
ridiculous to think that the idea of color can just happen in a
physical environment that doesn't already support it a priori. It does
not evolve as a consequence of natural selection, not only because it
serves no special function that unconscious detection would not
accomplish, but because there is no precursor for it to evolve from,
no mechanism for cells or organs to generate perception of color were
it not already a built in possibility. I'm saying that color
perception is more unlikely to exist in a purely physical cosmos than
time travel or omnipotence as a possible physical adaptation. I'm
trying to get at Jason's radical underestimation of the gap between
zoological necessity and the possibility of color's existence.


I think the problem with Chalmer's view, is that by assuming a universe without qualia (or philosophical zombies) are possible, it inevitably leads to substance dualism or epiphenominalism.  If zombies are possible, it means that consciousness is something extra which can be taken away without affecting anything.  Thus, conscious would have no effects, which I think is against your view.  Are you familiar with this: http://www.philforum.org/documents/An%20Unfortunate%20Dualist%20(Raymond%20Smullyan).pdf ?
If not, it can give you a feel for why zombies may be logically impossible.  So what is your thought on this subject?  Can a universe exist just like ours but have different qualia or none at all?

I think there are two different questions in play.  Usually philosophical zombies are defined as acting just like us; but  it is left open as to whether their internal information processing is just like ours.  I think one might be able to create an artificial person who acted just like us, but who had somewhat different internal processing.  They might experience qualia somewhat differently - how would you tell.  My wife and I are always disagreeing about where to draw the line between blue and green.  Is she experiencing the color differently?  Part of the reason we assume other people experience qualia the way we do is that they are built like us.  Suppose after building the artificial person and confirming it acts just like we do, you added a lot of memory capacity so that everything the artificial person looked at was recorded - but not accessed.  Would this produce a difference in qualia? 

Brent



Craig Weinberg

unread,
Jul 12, 2011, 7:49:06 PM7/12/11
to Everything List
>> Not sure what you mean in either sentence. A plastic flower behaves
>> differently than a biological plant.

>Sure. But they have not the same function.

They both decorate a vase. How do we know when we build a chip that
it's performing the same function that a neuron performs and not just
what we think it performs, especially considering that neurology
produces qualitative phenomena which cannot be detected at all outside
of our personal experience? Maybe the brain is a haunted house built
of prehistoric stones under layers of medieval catacombs and the chip
is a brand new suburban tract home made to look like a grand old
mansion but it's made of drywall and stucco and ghosts aren't
interested.

>Because all known laws of nature, even their approximations, which can
>still function at some high level, are Turing emulable.

But consciousness isn't observable in nature, outside of our own
interiority. Is yellow Turing emulable?

>By computers I mean universal
>machine, and this is a mathematical notion.

I don't know, man. I think computers are just gigantic electronic
abacuses. They don't feel anything, but you can arrange their beads
into patterns which act as a vessel for us to feel, see, know, think,
etc.

>That's a bad note! What is the first 5th % that you don't understand?

Each sentence is a struggle for me. I could go through each one if you
want:

"I will first present a non constructive argument showing that the
mechanist
hypothesis in cognitive science gives enough constraints to decide
what a "physical reality"
can possibly consist in."

I read that as "I will first present a theoretical argument showing
that the hypothesis of consciousness arising from purely mechanical
interactions in the brain is sufficient to support a physical reality."
Right away I'm not sure what you're talking about. I'm guessing that
you mean the mechanics of the brain look like physical reality to us.
Which I would have agreed with a couple years ago, but my hypothesis
now makes more sense to me, that the exterior mechanism and interior
experience are related in a dynamic continuum topology in which they
diverge sharply at one end and are indistinguishable in another.

>Read just the UDA. The first seven steps gives the picture. Of course,
>you have to be able to reason with an hypothesis, keeping it all along
>in the reasoning.

I'm trying, but it's not working. I think each step would have to be
condensed into two sentences.

>No, they are related to arithmetical relations and sets of arithmetical relations.
Maybe that's the issue. I can't really parse math. I had to take
Algebra 2 twice and never took another math class again. If the
universe is made of math I would have a hard time explaining that. Why
is math hard for some people if we are made of math? Why is math
something we don't learn until long after we understand words, colors,
facial expressions, etc?

>God creates the natural numbers; all the rest is created by the natural numbers.
Numbers create things? Why?

>> My focus is on describing what and who we are in the simplest way. To my mind,
>> what and who we are cannot be described in purely arithmetic
>> relations, unless arithmetic relations automatically obscure their
>> origin and present themselves in all possible universes as color,
>> sound, taste, feeling, etc.

>Nice picture. This is what happens indeed.

You are saying that there is an absolute ontological correlation
between numbers and phenomena, i.e. all possible spectra begin with
red, all possible periodic tables begin with Hydrogen - the
singularity of the proton is immutably translated as the properties of
elemental hydrogen in all physical universes?

>It is in between. Because physics is not the projection of the human
>mind, but the projection of all universal (machine (number)) mind.
I can go along with that, although I would not limit the universal
interior order to machine, number, or mind, but rather a more all-
encompassing phenomenology like 'sense' or 'pattern'.

>>By definition, mental phenomena are
>> exempt from physical constraints, such as gravity, thermodynamics,
>> etc.

>They are not; even in Platonia.

You're not saying that Mickey Mouse has mass and velocity though,
right? I don't get it.

>The complex problem is how pain are possible, and yes, I think that
>computer science has interesting things to say here.

Like what?

There might be a bit of a language barrier... I'm just not sure what
you mean towards the end. Why does the universal machine pretend not
to be a machine?

Craig
On Jul 12, 3:58 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 11 Jul 2011, at 23:57, Craig Weinberg wrote:

Craig Weinberg

unread,
Jul 12, 2011, 8:02:55 PM7/12/11
to Everything List
I think the point of philosophical zombies for Chalmers is not to
invoke dualism or epiphenomenalism but to point out that we cannot
tell from the outside what is going on on the inside. I agree with
that, but it's not because human consciousness is a separate thing
from human neurology, but rather they are two separate ends of the
same continuum. Two sides of the same coin with perpendicular
ontologies.

>Can a universe exist just like ours but have different qualia or none at all?

The world you see in a mirror is a universe with some different and
absent qualia. It has no smell, no sound. Things are backwards.

>I believe qualia are a property of the mind, not a property of the physics on which the mind is built.

I reconcile the two by saying that phenomena are quantitative on one
side and qualitative on the other, or electro on one side, magnetic on
side two, sensory on side three, and motive on side four. Maybe I'm a
quadralist.



Jesse Mazer

unread,
Jul 12, 2011, 8:36:37 PM7/12/11
to everyth...@googlegroups.com


> Date: Tue, 12 Jul 2011 15:50:12 -0700
> Subject: Re: Bruno's blasphemy.
> From: whats...@gmail.com
> To: everyth...@googlegroups.com

>
> Thanks, I always seem to like Chalmers perspectives. In this case I
> think that the hypothesis of physics I'm working from changes how I
> see this argument compared to how I would have a couple years ago. My
> thought now is that although organizational invariance is valid,
> molecular structure is part of the organization. I think that
> consciousness is not so much a phenomenon that is produced, but an
> essential property that is accessed in different ways through
> different organizations.

But how does this address the thought-experiment? If each neuron were indeed replaced one by one by a functionally indistinguishable substitute, do you think the qualia would change somehow without the person's behavior changing in any way, so they still maintained that they noticed no differences?


>
> I'll just throw out some thoughts:
>
> If you take an MRI of a silicon brain, it's going to look nothing like
> a human brain. If an MRI can tell the difference, why can't the brain
> itself?

Because neurons (including those controlling muscles) don't see each other visually, they only "sense" one another by certain information channels such as neurotransmitter molecules which go from one neuron to another at the synaptic gap. So if the artificial substitutes gave all the same type of outputs that other neurons could sense, like sending neurotransmitter molecules to other neurons (and perhaps other influences like creating electromagnetic fields which would affect action potentials traveling along nearby neurons), then the system as a whole should behave identically in terms of neural outputs to muscles (including speech acts reporting inner sensations of color and whether or not the qualia are "dancing" or remaining constant), even if some other system that can sense information about neurons that neurons themselves cannot (like a brain scan which can show something about the material or even shape of neurons) could tell the difference.
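
To make the "same outputs, same downstream behavior" point concrete, here is a toy Python sketch of a leaky integrate-and-fire unit; the class name and all parameters are invented for illustration and are not a model of any real neuron:

# Toy spiking unit: the only thing downstream units can "see" is the
# sequence of spikes it emits.  All numbers are arbitrary illustrative choices.
class ToyNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0                  # membrane potential, arbitrary units
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:
            self.v = 0.0              # reset after firing
            return 1                  # spike: the only externally visible output
        return 0

inputs = [0.3, 0.4, 0.5, 0.1, 0.9, 0.0, 0.6]
n = ToyNeuron()
print([n.step(i) for i in inputs])    # -> [0, 0, 1, 0, 0, 0, 1]

Any replacement unit (silicon, simulated, whatever) that returns the same spike train for the same inputs is, by construction, indistinguishable to everything downstream of it, which is all the thought experiment assumes.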

>
> Can you make synthetic water? Why not?

You can simulate the large-scale behavior of water using only the basic quantum laws that govern interactions between the charged particles that make up the atoms in each water molecule--see http://www.udel.edu/PR/UDaily/2007/mar/water030207.html for a discussion. If you had a robot whose external behavior was somehow determined by the behavior of water in an internal hidden tank (say it had some scanners watching the motion of water in that tank, and the scanners would send signals to the robotic limbs based on what they saw), then the external behavior of the robot should be unchanged if you replaced the actual water tank with a sufficiently detailed simulation of a water tank of that size.

>
> If consciousness is purely organizational, shouldn't we see an example
> of non-living consciousness in nature? (Maybe we do but why don't we
> recognize it as such). At least we should see an example of an
> inorganic organism.

I don't see why that follows, we don't see darwinian evolution in non-organic systems either but that doesn't prove that darwinian evolution somehow requires something more than just a physical system with the right type of organization (basically a system that can self-replicate, and which has the right sort of stable structure to preserve hereditary information to a high degree but also with enough instability for "mutations" in this information from one generation to the next). In fact I think most scientists would agree that intelligent purposeful and flexible behavior must have something to do with darwinian or quasi-darwinian processes in the brain (quasi-darwinian to cover something like the way an ant colony selects the best paths to food, which does involve throwing up a lot of variants and then creating new variants closer to successful ones, but doesn't really involve anything directly analogous to "genes" or self-replication of scent trails). That said, since I am philosophically inclined towards monism I do lean towards the idea that perhaps all physical processes might be associated with some very "basic" form of qualia, even if the sort of complex, differentiated and meaningful qualia we experience are only possible in adaptive systems like the brain (chalmers discusses this sort of panpsychist idea in his book "The Conscious Mind", and there's also a discussion of "naturalistic panpsychism" at http://www.hedweb.com/lockwood.htm#naturalistic )


>
> My view of awareness is now subtractive and holographic (think pinhole
> camera), so that I would read fading qualia in a different way. More
> like dementia.. attenuating connectivity between different aspects of
> the self, not changing qualia necessarily. The brain might respond to
> the implanted chips, even ruling out organic rejection, the native
> neurology may strengthen it's remaining connections and attempt to
> compensate for the implants with neuroplasticity, routing around the
> 'damage'. 

But here you seem to be rejecting the basic premise of Chalmers' thought experiment, which supposes that one could replace neurons with *functionally* indistinguishable substitutes, so that the externally-observable behavior of other nearby neurons would be no different from what it would be if the neurons hadn't been replaced. If you accept physical reductionism--the idea that the external behavior (as opposed to inner qualia) of any physical system is in principle always reducible to the interactions of all its basic components such as subatomic particles, interacting according to the same universal laws (like how the behavior of a collection of water molecules can be reduced to the interaction of all the individual charged particles obeying basic quantum laws)--then it seems to me you should accept that as long as an artificial neuron created the same physical "outputs" as the neuron it replaced (such as neurotransmitter molecules and electromagnetic fields), then the behavior of surrounding neurons should be unaffected. If you object to physical reductionism, or if you don't object to it but somehow still reject the idea that it would be possible to predict a real neuron's "outputs" with a computer simulation, or reject the idea that as long as the outputs at the boundary of the original neuron were unchanged the other neurons wouldn't behave any differently, please make it clear so I can understand what specific premise of Chalmers' thought-experiment you are rejecting.

Jesse

Jason Resch

unread,
Jul 12, 2011, 9:57:35 PM7/12/11
to everyth...@googlegroups.com
On Tue, Jul 12, 2011 at 6:10 PM, meekerdb <meek...@verizon.net> wrote:
On 7/12/2011 2:30 PM, Jason Resch wrote:


On Tue, Jul 12, 2011 at 8:17 AM, Craig Weinberg <whats...@gmail.com> wrote:

Not sure what the cogito has to do with the presumption of the
necessity of color. Omnipotence solves all problems by definition,
doesn't it? I'm just using it as an example to show that it's
ridiculous to think that the idea of color can just happen in a
physical environment that doesn't already support it a priori. It does
not evolve as a consequence of natural selection, not only because it
serves no special function that unconscious detection would not
accomplish, but because there is no precursor for it to evolve from,
no mechanism for cells or organs to generate perception of color were
it not already a built in possibility. I'm saying that color
perception is more unlikely to exist in a purely physical cosmos than
time travel or omnipotence as a possible physical adaptation. I'm
trying to get at Jason's radical underestimation of the gap between
zoological necessity and the possibility of color's existence.


I think the problem with Chalmer's view, is that by assuming a universe without qualia (or philosophical zombies) are possible, it inevitably leads to substance dualism or epiphenominalism.  If zombies are possible, it means that consciousness is something extra which can be taken away without affecting anything.  Thus, conscious would have no effects, which I think is against your view.  Are you familiar with this: http://www.philforum.org/documents/An%20Unfortunate%20Dualist%20(Raymond%20Smullyan).pdf ?
If not, it can give you a feel for why zombies may be logically impossible.  So what is your thought on this subject?  Can a universe exist just like ours but have different qualia or none at all?

I think there are two different questions in play.  Usually philosophical zombies are defined as acting just like us; but  it is left open as to whether their internal information processing is just like ours. 

That may be one definition.  The way I have heard zombies defined is that they are in all ways physically indistinguishable; that there is no physical test that could ever tell apart a zombie from a non-zombie.  I was using this definition above in my example and reasoning.

Jason

Craig Weinberg

unread,
Jul 12, 2011, 10:31:58 PM7/12/11
to Everything List
Oh yeah, I would agree with you: if you are talking about real-world,
live, healthy human bodies, then they are going to have a human experience.
In a hypothetical, you could not know for sure whether a person was a
zombie or not, just because subjectivity is airtight, but mechanically
there is no way to take away a person's soul without changing them
physically.


Bruno Marchal

unread,
Jul 13, 2011, 10:53:55 AM7/13/11
to everyth...@googlegroups.com

Evgenii,

>
> Why don't you make a course for dummies about this? (For example in
> Second Life)

Because in Second Life, the students already know that they are in
a virtual reality :)

It looks more difficult to explain this with first life inquirers.

But is it, really? I've got the feeling that those who don't understand
are those who don't study, or don't do the necessary work. Psychological,
contingent reasons? (I am thinking of the UDA, not the AUDA, which needs a
one-year course in mathematical logic/computer science.)

But your suggestion is pleasing and fun, and who knows, I might think
about it.
That will not cure my computer addiction, though :(

Bruno

>> Popular attempts to explain Gödel's theorem are often incorrect, and
>> the whole matter is very delicate. Philosophers, like Lucas, or
>> physicists, like Penrose, illustrate that it is hard to explain
>> Gödel's result to non logicians. I'm afraid the time has not yet come
>> for popular explanation of machine's theology.
>>
>> Let me try a short attempt. By Gödel's theorem we know that for any
>> machine, the set of true propositions about the machine is bigger
>> than the set of the propositions provable by the machine. Now, Gödel


http://iridia.ulb.ac.be/~marchal/

Bruno Marchal

unread,
Jul 13, 2011, 5:43:41 AM7/13/11
to everyth...@googlegroups.com

On 13 Jul 2011, at 01:49, Craig Weinberg wrote:

>>> Not sure what you mean in either sentence. A plastic flower behaves
>>> differently than a biological plant.
>
>> Sure. But they have not the same function.
>
> They both decorate a vase. How do we know when we build a chip that
> it's performing the same function that a neuron performs and not just
> what we think it performs, especially considering that neurology
> produces qualitative phenomena which cannot be detected at all outside
> of our personal experience. Maybe the brain is a haunted house built
> of prehistoric stones under layers of medieval catacombs and the chip
> is a brand new suburban tract home made to look like a grand old
> mansion but it's made of drywall and stucco and ghosts aren't
> interested.
>
>> Because all known laws of nature, even their approximations, which
>> can
>> still function at some high level, are Turing emulable.
>
> But consciousness isn't observable in nature, outside of our own
> interiority. Is yellow Turing emulable?

The experience of seeing yellow might be, although its stability will
need the global structure of all computations.
If you believe the contrary, you need to speculate on an unknown
physics.


>
>> By computers I mean universal
>> machine, and this is a mathematical notion.
>
> I don't know, man. I think computers are just gigantic electronic
> abacuses. They don't feel anything, but you can arrange their beads
> into patterns which act as a vessel for us to feel, see, know, think,
> etc.

Neither computer nor brain can think. Persons think.
And a computer has nothing to do with electronics, or anything
physical. It is more an information pattern which can emulate all
computable pattern evolution. It has been discovered in math. It
exists by virtue of elementary arithmetic. We can implement it in the
physical reality, but this shows only that physical reality is at
least Turing universal.

>
>> That's a bad note! What is the first 5th % that you don't understand?
>
> Each sentence is a struggle for me. I could go through each one if you
> want:
>
> "I will first present a non constructive argument showing that the
> mechanist
> hypothesis in cognitive science gives enough constraints to decide
> what a "physical reality"
> can possibly consist in."

This is the abstract. The paper explains its meaning.


>
> I read that as "I will first present a theoretical argument showing
> that the hypothesis of consciousness arising from purely mechanical
> interactions in the brain is sufficient to support a physical reality.

Not to support. To derive. I mean physics is a branch of machine's
theology.


> Right away I'm not sure what you're talking about. I'm guessing that
> you mean the mechanics of the brain look like physical reality to us.

I mean physics is not the fundamental branch. You have to study the
proof, not to speculate on a theorem.


> Which I would have agreed with a couple years ago, but my hypothesis
> now makes more sense to me, that the exterior mechanism and interior
> experience are related in a dynamic continuum topology in which they
> diverge sharply at one end and are indistinguishable in another.

That's unclear.

>
>> Read just the UDA. The first seven steps gives the picture. Of
>> course,
>> you have to be able to reason with an hypothesis, keeping it all
>> along
>> in the reasoning.
>
> I'm trying, but it's not working. I think each step would have to be
> condensed into two sentences.
>
>> No, they are related to arithmetical relations and set of
>> arithmetical relations.
> Maybe that's the issue. I can't really parse math. I had to take
> Algebra 2 twice and never took another math class again. If the
> universe is made of math

The point is that the universe is not made of anything. Neither
physical primitive stuff, nor mathematical stuff. You have to study
the argument to make sense of this. So you have to accept the comp
hypothesis at least for the sake of the argument.


> I would have a hard time explaining that. Why
> is math hard for some people if we are made of math?

Well, I could ask you why physics is hard if we obey the laws of
physics. This is a non sequitur.
Also, we are not made of math. Math is not a stuffy thing. It is just
a collection of true facts about immaterial beings.


> Why is math
> something we don't learn until long after we understand words, colors,
> facial expressions, etc?

Because we are not supposed to understand how we work. The
understanding of facial expressions requires many complex mathematical
operations done unconsciously. We learn to use our brain well before
even knowing we have a brain.

>
>> God create the natural numbers, all the rest is created by the
>> natural numbers.
> Numbers create things? Why?

Relative to universal numbers, numbers do many things. We know now
that their doings escape any complete theory. We know now why numbers
have unbounded behavioral complexity. It seems to me that you can
already intuit this when looking at the Mandelbrot set, where a very
simple mathematical operation defines a monstrously complex object. See:
http://www.youtube.com/watch?v=9G6uO7ZHtK8
http://www.youtube.com/watch?v=UrEoKFYk0Cs
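
For the reader who prefers code to video, a minimal Python sketch of that simple operation (the grid and iteration limit are arbitrary choices, just enough to print a crude ASCII picture of the set):

# Iterate z -> z*z + c for a grid of c values and print '*' for the points
# whose orbit has not escaped after max_iter steps.
def escapes(c, max_iter=50):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:                     # once |z| > 2 the orbit diverges
            return True
    return False

for row in range(24):
    y = 1.2 - row * 0.1                    # imaginary part of c
    print("".join(" " if escapes(complex(-2.0 + col * 0.035, y)) else "*"
                  for col in range(78)))

The whole rule is the single line z = z*z + c; everything else is bookkeeping, which is the point about a very simple operation yielding unbounded complexity.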

>
>>> My focus is on describing what and who we are in the simplest way.
>>> To my mind,
>>> what and who we are cannot be described in purely arithmetic
>>> relations, unless arithmetic relations automatically obscure their
>>> origin and present themselves in all possible universes as color,
>>> sound, taste, feeling, etc.
>
>> Nice picture. This is what happens indeed.
>
> You are saying that there is an absolute ontological correlation
> between numbers and phenomenon, ie all possible spectrums begin with
> red, all possible periodic tables begin with Hydrogen - the
> singularity of the proton is immutably translated as the properties of
> elemental hydrogen in all physical universes?

Not necessarily. The structure of the proton might be more
geographical (contingent) than physical (same for all observers).
It is better to understand the reasoning by yourself than to speculate
ad infinitum about what I could say. The exact frontier between geography
and physics remains to be determined (in the comp theory). In the non
comp theory, the question cannot even be addressed.

>
>> It is in between. Because physics is not the projection of the human
>> mind, but the projection of all universal (machine (number)) mind.
> I can go along with that, although I would not limit the universal
> interior order to machine, number, or mind, but rather a more all-
> encompassing phenomenology like 'sense' or 'pattern'.


I cannot be satisfied with this, because it puts what I want to explain
(mind and matter) in the starting premises.
Then I show that comp leads to a precise (and mathematical)
reformulation of the mind-body problem.

>
>>> By definition, mental phenomena are
>>> exempt from physical constraints, such as gravity, thermodynamics,
>>> etc.
>
>> They are not; even in Platonia.
>
> You're not saying that Mickey Mouse has mass and velocity though,
> right? I don't get it.

It depends on the context. Mickey Mouse is a fiction; as such he has a
mass, relative to his fictive world. That world is not complex
enough to attribute meaning to physical attributes, or mental ones, so
your question does not make much sense.

>
>> The complex problem is how pain are possible, and yes, I think that
>> computer science has interesting things to say here.
>
> Like what?

Like obeying the laws of qualia, where qualia are defined by what
the machine can know immediately, yet cannot prove that it knows
that. It is a part of "machine's theology".

>
> There might be a bit of a language barrier.. I'm just not sure what
> you mean towards the end. Why does the universal machine pretend not
> to be a machine?

Because the machine's first person experience is related to the notion
of truth, which is a highly non computable notion.
Computationalism confronts all machines with a lot of non computable
elements. Theoretical computer science is mainly the study of the non
computable reality (of numbers).

Bruno

Craig Weinberg

unread,
Jul 14, 2011, 6:42:17 PM7/14/11
to Everything List
>The experience of seeing yellow might be, although its stability will
>need the global structure of all computations.
>If you believe the contrary, you need to speculate on an unknown
>physics.

I don't consider it an unknown physics, just a physics that doesn't
disqualify 1p phenomena. I don't get why yellow is any less stable
than a number.

>Neither computer nor brain can think. Persons think.
>And a computer has nothing to do with electronic, or anything
>physical.

I get what you're saying, but you could put a drug in your brain that
affects your thinking, and your thinking can be affected by chemistry
in your brain that you cannot control with your thoughts. In my
sensorimotive electromagnetism schema, everything physical has an
experiential aspect and vice versa.

>It is more an information pattern which can emulate all
>computable pattern evolution. It has been discovered in math. It
>exists by virtue of elementary arithmetic. We can implement it in the
>physical reality, but this shows only that physical reality is at
>least Turing universal.

It sounds like what you're suggesting is that numbers exist
independently of physical matter, whereas I would say that numbers
insist through the experiences within physical matter.

>The point is that the universe is not made of anything. Neither
>physical primitive stuff, nor mathematical stuff. You have to study
>the argument to make sense of this. So you have to accept the comp
>hypothesis at least for the sake of the argument.

Hmm. If the universe isn't made of anything then your point isn't made
of anything either. I don't get it.

>Also, we are not made of math. math is not a stuffy thing. It is just
>a collection of true fact about immaterial beings.

Have you read any numerology?

>Relative to universal numbers, numbers do many things. We know now
>that their doings escape any complete theory. We know now why numbers
>have unbounded behavioral complexity. It seems to me that you can
>already intuit this when looking at the Mandelbrot set, where a very
>simple mathematical operation defines a monstrously complex object.

The complexity is in the eye of the perceiver. Your human visual sense
is what unites the Mandelbrot set into a fractal pattern. There is no
independent 'pattern' there unless what you are made of can relate to
it as a coherent whole rather than a million unrelated pixels as your
video card sees it, or maybe as a nondescript moving blur as a gopher
might see it.

>I cannot be satisfied with this, because it puts what I want to explain
>(mind and matter) in the starting premises.
>Then I show that comp leads to a precise (and mathematical)
>reformulation of the mind-body problem.

Are you more interested in satisfying your premise, or discovering a
true model of the cosmos?

>> You're not saying that Mickey Mouse has mass and velocity though,
>> right? I don't get it.

>It depends on the context. Mickey Mouse is a fiction; as such he has a
>mass, relative to his fictive world. That world is not complex
>enough to attribute meaning to physical attributes, or mental ones, so
>your question does not make much sense.

How does Mickey Mouse have mass? Whoever is drawing the cartoon can
make the universe he is in be whatever they want. It doesn't have to
have pseudophysical laws like gravity. He can just teleport around a
Mandelbrot set.

Craig

L.W. Sterritt

unread,
Jul 14, 2011, 7:28:40 PM7/14/11
to everyth...@googlegroups.com, L.W. Sterritt
What is a "person"?  What can a "person" be but the continuos response of a wet chemical neural network to exogenous and endogenous inputs.  The response will be modified by changes in the networks chemical environment, and now we learn by strong external pulsed magnetic fields.  In a series of very relevant experiments, with readily reproduced results, subjecting certain brain regions to a pulsed magnetic field, changes the brains notions of ethics/morality, while the field is applied.  When the field is turned off, the brain returns to it's previous perceptions of the world.  The technique, Transcutaneous Magnetic Stimulation (TMS) was first developed as a noninvasive treatment for depression, being much less disruptive than ECT.  Then researchers asked, can we modify the functioning of "healthy" brains - possibly even improve functions such as memory ?

Participants in this exchange might enjoy a subscription to Nature Neuroscience.  It's not an easy read, but interesting.

Lanny
 