The structure of a self-conscious mind


Anssi Hyytiainen

Aug 4, 2005, 5:09:28 AM
How does self-consciousness emerge from the brain?

If we agree that "self-consciousness emerges naturally from the
processes run by the brain", I have a suggestion for an answer at a
lower level: a suggestion for the *structure* of a process that would
logically and naturally bring forth an "experience of self-awareness."

This is a view that I've held for some time now, and even though it is
stupid simple at its core, I haven't heard anyone express a similar
idea before (although, as always, there are probably people with
loosely similar ideas out there). I suspect this is rarely talked about
because our intuitive feel of "self" leads us to look in all the wrong
places, and because of our poor ability to comprehend complex
processes.

WHY IS THERE SELF-CONSCIOUSNESS?

I was reading Richard Dawkins' "The Selfish Gene" when the last piece of
this whole puzzle hit me. That book is highly recommended, if not
mandatory reading for this discussion. Or at the very least its 4th
chapter, "The Gene Machine".

It talks about the rise of "behaviour" in organisms. In order to survive
in a rapidly changing environment, animal organisms have developed
"behaviours", which at the most basic level mean hard-wired reactions
to perceptions from the environment. (This much we can replicate with
simple robots.)
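
Just to make "hard-wired reaction" concrete, here is a tiny toy sketch of
my own (an illustration only, not anything from the book): a "robot" whose
entire behaviour is a fixed mapping from perception to reaction:

# Toy illustration: "behaviour" as a hard-wired mapping from perception to reaction.
# There is no learning and no inner model; the organism/robot just reacts.

REFLEXES = {
    "bright_light": "move_away",
    "touch": "recoil",
    "food_smell": "move_toward",
}

def react(perception):
    # Anything the wiring doesn't cover produces the default behaviour: stay put.
    return REFLEXES.get(perception, "stay_put")

for stimulus in ["bright_light", "darkness", "food_smell"]:
    print(stimulus, "->", react(stimulus))

There is no learning and no inner model here; the thing just reacts. That
is roughly the level where "behaviour" starts.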

Sophisticated nervous systems (and sophisticated behaviours) arose
because it is useful for an organism to have a capacity to react to
things in a changing environment - and, furthermore, a capacity to
*predict* events in a changing environment and react "ahead of time".
I.e. a capacity to run a "simulation" of a hostile situation inside the
head, to understand "what is about to happen" and come up with an
effective response to survive. Or, the way I would put it, a "capacity to
apply past knowledge to novel situations through logical reasoning".
Plainly put, if you see a rock rolling down towards you, you can
effortlessly conclude it WILL crush you if you don't move away, based on
your knowledge of how things work. That's useful.

Furthermore, The Selfish Gene speculates "Perhaps consciousness arises
when the brain's simulation of the world becomes so complete that it
must include a model of itself". The book recognizes this is not a
satisfactory explanation because "it involves an infinite regress - if
there is a model of the model, why not a model of the model of the
model...?"

What we need to realize here is that no "*model* of itself" is needed
for an experience of self-awareness. Your concept of "self" is not so
much a model of you as it is a simple "token" for your logical
processes. You hold the declarative truth "I exist" in your memory. And
the reason you hold this declaration is that at some point in your
life, your brain accumulated enough information about the world to
draw "I exist" as a logical conclusion. Self-awareness, then, is a
"side-product" of the vast reasoning power of the human brain. I say
side-product because self-awareness itself doesn't seem to be useful
for the survival of an organism. For the most part it's just in the way.
But reasoning power IS useful, the more the better, and our brain just
couldn't help but "figure things out".

There appear to be a few animals that are capable of "figuring things
out" as well, but for the most part animals simply lack the raw
reasoning power, and/or the storage space for a large enough web of
associations, to come to the same conclusion. At least this guy fell far
short: http://www.compfused.com/directlink/838/ :)

ESTABLISHING SELF-AWARENESS IN THE BRAIN

Your brain observes patterns in the world and builds a "web of
associations" (= your worldview) by "searching" for logical relationships
between observed things. Everything you know, you must first learn. You
learn things like "wood is hard" and "sand is soft". Your worldview is
your very own idea of how things work and relate to each other, and
it keeps improving throughout your life. Improving the worldview is
something that we also do consciously a lot, when we reason out complex
stuff like "if A and B are true, then C must also be true". And
throughout our lives, we may come to view the same things in radically
different ways.

A worldview is needed to make more accurate interpretations of the
things you see around you. Once your brain knows how things work, it can
make meaningful assumptions about unfolding events. *Your worldview
is a reference against which everything you perceive is tested.*

Eventually your brain will build such a large worldview that it can
reasonably come to the conclusion that there is such a thing as "a
perceiver" (= you) of all these things. Or, to put it another way: in
order to make more accurate interpretations of the world, it is
*necessary* to store a token that represents "self" in your worldview.

And since everything you perceive is tested against this
"worldview", which now holds the declaration that you do indeed exist,
you will necessarily come to interpret ALL your perceptions as "I
perceived" or "I did". "I walked into the room". "I saw it on the news".

Self-awareness is simply about how your brain processes begin to
interpret the world at some point in your life.
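
To make that structure a bit more concrete, here is a minimal toy sketch
of my own - only an illustration of the kind of structure I mean, not a
claim about how the brain actually implements it. A worldview is just a
store of learned associations, and once the token "self" has been added
to it, every new perception is interpreted against it and stored as
something that happened to "me":

# Toy sketch: a worldview as a web of associations, used as the reference
# against which every perception is interpreted.

class Worldview:
    def __init__(self):
        self.associations = {}   # thing -> set of things it is associated with
        self.memories = []

    def learn(self, a, b):
        # Store a learned relationship, e.g. ("wood", "hard").
        self.associations.setdefault(a, set()).add(b)
        self.associations.setdefault(b, set()).add(a)

    def interpret(self, perception):
        known = self.associations.get(perception, set())
        if "self" in self.associations:
            # Once "self" exists in the worldview, everything is framed as
            # "I perceived ..." and stored as an event that happened to "me".
            memory = ("I perceived", perception, tuple(known))
        else:
            # Before that, the perception is just an unattached pattern.
            memory = ("something occurred", perception, tuple(known))
        self.memories.append(memory)
        return memory

w = Worldview()
w.learn("wood", "hard")
print(w.interpret("wood"))     # no "self" token yet: just an unattached pattern
w.learn("self", "exists")      # the brain concludes "I exist"
print(w.interpret("wood"))     # now interpreted as "I perceived ..."

Notice that nothing in the sketch "models itself"; the "self" entry is
just one more association, which from then on colours every interpretation.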

INFANT AMNESIA

To drive this point home, let's have a little thought experiment about
infant amnesia. The question we are trying to answer is "why can't I
remember anything from the time I was an infant?"

I think it is safe to assume that when you were an infant, you hadn't
realized yet that you exist. The moment you took your first gasp of air,
you didn't go "Oh, so here I am now". You did NOT have that declarative
truth "I must exist" at the disposal of your reasoning processes yet. At
birth, the world was just a blur of strange perceptions without any
logical relationships between things.

Try to imagine in what form you would store your memories if you
couldn't refer to the concept of "self". If the stuff that was happening
wasn't happening to "you", or to anybody. If things would just "be". How
exactly do you handle perceptions if you can't refer to a "perceiver"?

Try to imagine how you would have experienced today without
understanding that you exist.

That's actually a trick question, because your days are full of
decisions that are based on the knowledge that you DO exist. Every
decision you make is anchored in one way or another to this knowledge.
If you see a car coming towards you, you understand it is YOU who gets
killed if you step in front of it. For an infant, a car is just a
strange colourful thing that makes a funny noise and gets larger. Almost
as interesting as the strange protrusions that seem to keep making a
fuss about themselves so persistently (your infant body).

Babies, and animals, can learn relationships between things. An infant
brain detects patterns through the sensory systems and attempts to draw
logical relationships between these patterns (sounds, objects,
tastes...). Slowly a web of associations (= your worldview) is built
through rigorous attempts to puzzle everything together in logical ways.

I think it is likely that the infant brain builds and discards a number
of radically different worldviews during the first moments of life,
until there is enough information to finally put the last 1+1 together
and conclude "I must exist". It is not a "eureka" moment, however. It's
more that the concept of "self" gets established inside the mind, and
from that point on the relationships between yourself and everything
else start crystallizing and getting more and more "accurate" inside
your worldview. The more you build your worldview, the more "conscious"
you become of the fact of "self-existence".

From that point on, everything you experience is stored and processed
as something that happened to "you". "I exist" becomes in many ways the
most important statement of your worldview. (Note: this suggests that in
certain conditions infant brains (and why not adult brains) could come
to conclude that multiple "selves" must exist, and each would come to
hold an independent web of associations as its worldview. I.e. a
multiple personality disorder.)

As an adult, then, when you try to recall "what did *I* do when I was an
infant?", the answer is, in a very real sense, "YOU didn't do anything".
The world spun on, but YOU didn't even exist. You couldn't possibly
have stored any memories in the form of "events that happened to me". Your
worldview simply didn't include you.

The furthest memory you CAN recall is obviously some experience that
happened to YOU. Therefore, your brain had already figured out your
"self" at that point. You can't remember anything from before this,
because it would be - for example - some strange piece of knowledge about
a relationship between "a familiar voice pattern" and "mom", without any
association to when "it" happened or to whom. (It is very hard to
imagine how past events are stored when self-awareness is not involved.)

So, if you think you DID exist when your body was an infant, but you
just don't remember it, you have placed unwarranted importance on your
"self". You are not your brain tissue, just like a computer program is
not the processor. You are not even the electric impulses going around
your brain, just like a computer program is not the electric impulses
inside a processor.

You are *a process* with the appropriate memories & knowledge at its
disposal, just like a computer program is a process with the appropriate
data at its disposal. Or, given that the process itself is exactly the
same for all of us (with marginal differences), YOU are nothing but your
memories (albeit the actual experience of "self" only springs forth once
you can also process these memories). You - as a person - came to be when
your brain finally realized you ARE.

Consider this: Suppose that all your memories (worldview, long-term
memory, procedural memory...) could be erased and someone else's - say,
John's - memories could be copied into your brain. Would you still be
yourself? Or would you "think" you are John?

The answer to both questions is a big NO! Not a trace of you would be
left. Effectively, this new person would come to use your "hardware"
(and would be pretty confused about why his body changed all of a
sudden). YOU WOULD NOT EXIST IN ANY FORM AT ALL. You were just killed.

Notice that the above paragraph includes another dangerous
misconception. I said this other person "would be pretty confused about
why his body changed all of a sudden". This does not suggest that a
person, living somewhere else, would all of a sudden have an experience
of having been "teleported" into your body. None of this copying stuff
would affect the source of the copy at all. "John" would carry on living
as usual. But at the moment that your brain, the hardware, would begin
running its logical reasoning processes according to the "data" of
"John", another John would come into existence, completely convinced
that he is the original John, only sitting in the wrong body. This "Other
John" WOULD have a recollection of suddenly having been "teleported"
into another body.

ARTIFICIAL CONSCIOUSNESS

Again, if we agree that there is nothing supernatural or mysterious
about self-consciousness, then a man-made machine can also "become"
self-conscious (which should be obvious by now, given that we are
basically machines ourselves). But designing a self-conscious AI might
be a bit tricky. First we'd need to figure out the details regarding how
the mind emerges from the brain processes. The state we are in with
modern brain science is much like trying to figure out how "Microsoft
Flight Simulator" works by shooting the CPU with a thermal camera.
Anyone familiar with computer programming and CPU structures can surely
see the magnitude of the problem here. Except that in the case of brain
research, the problem is even bigger because:

1. It appears that different people have come to use their "hardware" in
different ways for the same purposes. There are case examples of people
who have, upon damage to the left hemisphere, learned to use their right
hemisphere for language processing.

2. Emergent processes are notoriously unintuitive and difficult to
reverse-engineer, and the level of emergence and parallelism is likely
to be much higher in human brain processes than in any processes
designed by human consciousness (e.g. MS Flight Simulator).

3. There is no evolutionary pressure for simple, comprehensible logical
structures in the brain processes, but there IS strong pressure for
optimized speed (of "software" and "hardware"). Complex processes that
are optimized for speed usually translate into utterly incomprehensible
logical structures. The processes run by the brain are likely to be
logical spaghetti on every imaginable level of abstraction (without
any explicit abstraction layers actually existing).

And even if we find that something like quantum mechanics plays a part in
the brain processes, it will not explain consciousness. It will only
make us see that the brain processes are even more complicated than we
thought. It will bring us one step closer to the truth by revealing the
full magnitude of the problem.

So what can we do? The day we can *design* a self-conscious artificial
intelligence is not in sight. But I strongly suspect that the day we
*create* the first self-conscious AI is not that far over the horizon.

We can simulate evolution, and through an evolutionary process a
self-conscious artificial intelligence can emerge long before we
understand much about the details of a self-conscious process. The first
self-conscious artificial intelligence system will be built by "growing"
one in a virtual environment, through a specifically devised
evolutionary process (that's where we need the knowledge about the
likely structure of a self-conscious mind).

And when the "potentially self-aware system" has "grown", the AI still
needs to "learn" for itself that it exists (for example, it really needs
to "exist" concretely inside a world, and the world needs to be responsive
in ways that give hints of self-"existence"). It needs to build the
worldview itself, and it needs to "live" in such a "world" inside the
computer where it can come to a conclusion about itself.
Reverse-engineering the result will bring us much insight into how we
work ourselves. Although I suspect reverse-engineering will be almost as
difficult with such an AI as it is with the human brain, for the same
reasons I stated above. But at least we can exclude unknowns in the laws
of nature (the laws of computers are explicit). Plus the computer AI
doesn't need mature processes for things like deciphering complex
information from the sensory systems.

Yet there will be heated debates about whether or not such an AI actually
*IS* self-aware or just claims it is. Again, because people place
unwarranted importance on "self". Even if the artificial processes
were 1:1 with human brain processes, there would be people who would
not be convinced. Hell, there probably are people who aren't even
convinced of OTHER PEOPLE being self-conscious.

In any case, I applaud efforts like Avida http://dllab.caltech.edu/avida/

Can a human mind be copied into a synthetic brain then? Basically, yes.
But if you were to just copy your mind into a machine, you wouldn't
experience being shifted in there. You would still exist inside your
body. The only way for YOU to actually have an experience of "shifting"
into a computer - I presume - is to actually convert your brain into a
computer piece by piece. (Although, if you just do a plain copy of your
mind into a machine, the COPY of you would have an experience of being
shifted, and it is debatable whether this is actually enough. After all,
the copy is a self-conscious being just as much as you are, and
technically, nothing makes you more of a "you" than he/she is.)

-Anssi


Sir Frederick

Aug 4, 2005, 5:37:50 AM
'Consciousness' and 'mind' are false folk theories.
You've got to do better than explain how the angels are
dancing on the head of a pin.

Nice try otherwise.
--
Best,
Frederick Martin McNeill
Poway, California, United States of America
mmcn...@fuzzysys.com
http://www.fuzzysys.com
http://members.cox.net/fmmcneill/
*************************
Phrase of the week :
"What are subjective
experiences? Are they magic?
What is magic?
Are they delusions? What are delusions?"
-- Me (1937 - ...)
:-))))Snort!)
*************************

SpiKe

Aug 4, 2005, 6:33:16 AM
"Anssi Hyytiainen" <ans...@nic.fi.ANTISPAM> wrote in message
news:b3lIe.3054$ae3....@reader1.news.jippii.net...

Read L. Vygotsky.

http://tip.psychology.org/vygotsky.html

http://www.kolar.org/vygotsky/

And nice clip by the way.


forbi...@msn.com

Aug 4, 2005, 9:07:40 AM
All natural properties emerge from natural processes.
This doesn't mean all properties have material-independent
implementations.

We needn't agree consciousness can emerge from processes happening
in computers just because we can agree that consciousness emerges
from processes happening in brains. If consciousness cannot
emerge from processes happening in computers then neither can
self-consciousness.

Some will assert consciousness is an ascription to a set of behaviors.
I don't think so but even if I'm wrong it's merely a labelling problem.
The property I refer to when I use the word consciousness still exists
even though I might be inappropriately using the word. As it turns
out I'm less concerned about computers replicating my behaviors than
with this property to which I refer when I use the phrase "my
consciousness".

The difficulty in proceeding forward in a scientific way concerning
that which I reference when I use the phrase "my consciousness" is
that it is tied to processes happening in my brain.

Suppose someone were to assert a computer running a special program
could have "consciousness" as I mean it and the proof was to have
my consciousness copied to the computer and after some period have
it copied back into my brain where I could tell by introspection
that I had consciousness the whole time. Well, I wouldn't accept
this as proof of consciousness during the period the processes were
running outside my brain since I wouldn't have access to this
period directly. I would only have access to consciousness
occurring during the access of "memories" caused by structural
changes to my brain made during the "copying" process. My
"consciousness" occurrs in the eternal "now". I don't even have
access to prior instances in order to compare them angainst anything.

I don't have a very good framework in which to discuss these things.

Consider for a moment "pain". I hope you have a pretty good idea
as to what I am referring to, even though I have no way to validate it.
Now if I were to accept your assertion that the processes causing
the qualia of "pain" in me could be replicated in a computer in
such a way that "pain" would exist in the computer when it was
engaged in the right process (running the right program) then why
should I not be concerned that these processes were not happening
in computers all over the world right now? Certainly it couldn't
be because the computers weren't exhibiting the right external
behaviors since this would invalidate the assumption that "pain"
emerged from the right process happening inside the computer.
"Pain" would be a strange attribution on your part if I experiened
it only because you labelled it thusly in your brain.

I've got to go. I wish I had more time to express my concern right
now, but I do not. Nonetheless, unless I have communicated some
portion of what I'm discussing, I'm not sure more words will be helpful.

SpiKe

Aug 4, 2005, 10:36:54 AM

"Anssi Hyytiainen" <ans...@nic.fi.ANTISPAM> wrote in message
news:b3lIe.3054$ae3....@reader1.news.jippii.net...

Read L. Vygotsky.

Sir Frederick

Aug 4, 2005, 1:45:06 PM

Consider 'redness'. The experience of it is a quale.
An internal representation of electromagnetic wavelength,
projected to be 'out there' for evolutionarily useful reasons.

Then consider 'consciousness'. The experience of it is a quale.
An internal representation of brain activity,
projected to be 'in there' for evolutionarily useful reasons.

forbi...@msn.com

Aug 4, 2005, 9:25:27 PM
Red isn't just "an internal representation ..."; it is the
experience of a particular internal representation, at
least in the individual case. Each individual may
have a unique experience associated with a unique
representation (physical structure and state (transition?)).

While we might project, for evolutionary reasons,
the experience as a representation of something,
anything, the experience exists quite apart from
our use of it.

Brain in a vat.

We might choose for our own reasons to intercept
the signals going into and out of a particular brain
and put the brain to some use other than that for
which it evolved. The state transitions happening
in the brain wouldn't allow it to distinguish between
its disconnectedness from the original environment
and its replacement environment unless there are
problems with our original-to-replacement mapping.
The emergence of red qualia could have a completely
different source not associated with certain electromagnetic
wavelengths. We, on the outside, might know
the difference, but such knowledge needn't be communicated
to the brain in the vat.

Sir Frederick

Aug 4, 2005, 9:59:33 PM

Agreed.
As in synesthesia, about which I have posted here several times.

Anssi Hyytiainen

Aug 5, 2005, 11:01:03 AM

Interesting... It seems like his ideas partially parallel mine
but also partially cover different areas of the "consciousness problem".

This makes me wonder, though, whether so-called "feral children" show
signs of self-consciousness before learning anything about the human
world (my view would imply they do, but in vague ways, because they
couldn't communicate it to the outside world).

Unfortunately, as far as I know, all the feral-children cases are so
poorly documented that we don't know which parts of them are true and
which parts are folk stories. :(

-Anssi

Anssi Hyytiainen

Aug 5, 2005, 11:08:29 AM
Sir Frederick wrote:
> 'Consciousness' and 'mind' are false folk theories.

I would reply that I agree, but I'm not sure how you mean that... The
way I do agree is that "consciousness" and "mind" are, in a technical
sense, completely different from what they appear to us intuitively. I.e.
in my view there is no "theater of mind" or "internal representation of
perception", or anything else that could be identified as uniquely "you"
inside your brain processes. Except MAYBE, in some ways, your
"experiences" (incl. your corresponding "worldview") could make up "you".

But then I am also afraid these kinds of discussions easily slide into
semantics. Everything I said above doesn't mean anything unless you
remember what meaning I placed on "worldview" and what I mean by
something being "you", and even then it is probably way too vague...
Heh, but what can we do? :)

-Anssi

Anssi Hyytiainen

Aug 5, 2005, 2:55:00 PM
Thanks for the reply :)

forbi...@msn.com wrote:
> All natural properties emerge from natural processes.
> This doesn't mean all properties have material-independent
> implementations.
>
> We needn't agree consciousness can emerge from processes happening
> in computers just because we can agree that consciousness emerges
> from processes happening in brains.

Hmmmm, why is that? I mean, material properties can also be simulated
with a computer. People intuitively think of a "machine brain" as
something like a piece of hardware built out of "synthetic synapses",
but a more appropriate view could be "virtual brain tissue" which holds
the same properties as the organic counterpart.

Maybe I shouldn't say we could build a self-aware "machine". Better to
say we could build a self-aware "process". The hardware running that
process can be similar to today's computers, and the process itself can
operate in a completely virtual environment, including virtual brain
tissue, if need be.

Although my experience with emergent processes would suggest that
organic matter is in no way important for building a 1:1 *logical*
counterpart of human brain processes. I.e. there is always a certain
critical logic that MAKES the process, but the lower-level
implementation of that logic can always be done in many different ways,
with the exact same end results. And in this case the human brain
synapses are at the very bottom level of abstraction, and it is very
unlikely that they would be in any way critical for the higher-level
logic that gives rise to conscious experience.

In much the same way, it doesn't matter to a Java app whether it is
running on "Windows XP on an IBM PC with a hyperthreading Intel P4 CPU" or
"Mac OS X on a Power Mac with two G5 CPUs". There are simply so many
abstraction levels in between that the Java logic need not "have" any
information about the lowest-level processes at all.

That is why it is so difficult to derive the logic of human
consciousness by observing the very lowest-level brain activity.

Of course none of this can be proven just yet, but I can't see any reason
(anymore) to suspect things to be otherwise. And not least because it
would hold us back when trying to think about how "a logical process
could give rise to mind the way we experience it".

> The difficulty in proceeding forward in a scientific way concerning
> that which I reference when I use the phrase "my consciousness" is
> that it is tied to processes happening in my brain.
>
> Suppose someone were to assert a computer running a special program
> could have "consciousness" as I mean it and the proof was to have
> my consciousness copied to the computer and after some period have
> it copied back into my brain where I could tell by introspection
> that I had consciousness the whole time. Well, I wouldn't accept
> this as proof of consciousness during the period the processes were
> running outside my brain since I wouldn't have access to this
> period directly. I would only have access to consciousness
> occurring during the access of "memories" caused by structural
> changes to my brain made during the "copying" process. My
> "consciousness" occurrs in the eternal "now". I don't even have
> access to prior instances in order to compare them angainst anything.

I'm glad you made that point. Your example of your consciousness
being copied into a computer and then back into your brain reveals the
all too common pitfall that we tend to fall into when thinking about
self-consciousness. :(

Because your experience of consciousness is a manifestation of logical
processes, it is not anything tangible at all that could be copied back
and forth. There is no identifiable "you" that could be in any way
detached from your worldview and your "memories".

Only the process running in your brain, including the appropriate data
(worldview and "memories") can be copied, but that doesn't mean the
process running inside your brain would be affected at all by the
copying. It cannot be affected. Instead, in such a hypothetical
situation there would be a "new you", with its own consciousness, inside
the computer. (Depending on the virtual sensory systems, the perceptions
of the "virtual world" would likely be very different for your new
conscious self. He would probably feel like he was on LSD ;)

If that sounds way stupid, here's a new thought experiment I cooked up
(this actually applies to any non-supernatural self-consciousness, and
it still keeps blowing my own mind):

---
Suppose that we had the hardware to teleport people: you go into a
"source" pod, and the machine reads all the data and the electrical
state of your body. It then disassembles your body into little
particles and sends the complete information about your body to a
"destination" pod, which re-assembles an *exact* copy of you. And it
would in fact work like a charm.

Now, suppose you have been using such a machine every day to get to work
conveniently. Everything has been going smoothly until one day the
SOURCE pod malfunctions: after sending your data forward, it fails
to disassemble your body. Yet the destination pod builds a "new
you" at the destination. Would this mean you'd experience being in two
different places simultaneously? No. You would just be sitting in the
source pod, wondering why nothing happens. And the "other you" would
carry on with his day as usual.

Suppose that you, the original, found out what happened. Would you
then fix the machine and get yourself destroyed anyway? I.e. would you
commit suicide to make sure there are no extra copies of yourself?
After all, that's what "you" have been doing every day of your life!

Daily tip: if you come across such a teleportation machine, DON'T USE
IT! It just kills you, that's all. (Try to explain that to someone who's
been using teleportation "successfully" every day :)

The copy that springs from the destination pod is another you, with an
additional experience of "having been teleported" ("I was there, but now
I'm here"). He would be a self-aware human being just like you, and he
would be absolutely convinced about being the same person that entered
the source pod, and everything would carry on normally in the world.

Yet the original you, having entered the source pod, would on a normal
day just have an experience of "being killed" (well, you wouldn't
actually be recalling any experiences anymore). Since you
teleported to work every day, "you" were basically a new guy every day.
And killed every day. Again and again. The very original "you" is long
gone into oblivion, and would never know anything about this.

Conclusion: one could teleport any other object or organism without a
hiccup, but not a self-conscious one. That would amount to murder. This
conclusion is completely logical, but very strange at the same time. At
least for anyone who holds "I" rooted at the very base of their
worldview. (Which means all of us :)
----

I really hope you can interpret the teleportation thought experiment
correctly, and really think about what it means for our views of
self-consciousness. There is a very real sense in which you are a different
person every time some data changes inside your brain. What you are is
your worldview, your memories, and your ability to remember.

If you were to lose all your long-term memories permanently, you would
still be able to interpret the world and have a conscious experience
because of your intact worldview. But if you were to lose ALL of your
worldview as well, you would stop "being" as a person, because your brain
wouldn't be able to interpret anything you experience through the sensory
systems. You'd be like an infant. Your brain would need to build a new
worldview from scratch, again. Eventually your brain would conclude - again
- that you do indeed exist, and you would come to interpret the world
in a conscious sense. But you would also in a very real sense be a
completely different person (and you would likely also seem like a very
different person to everyone around you).

You experience "being", because your brain intepretates, stores and
recalls information that way. How it does this in a logical sense, was
outlined in "ESTABLISHING SELF-AWARENESS IN BRAIN" in the original post.

That is, btw, NOT to say the original post is proven correct, but it
is to say that if we built an artificial self-awareness through the kind
of structure that was explained, we would achieve the exact same "kind"
of self-awareness that we experience ourselves. Which would strongly
suggest there isn't anything "extra" in our brain processes, as it isn't
needed. (Read on, I have more to say about this in response to your
question about pain.)

> I don't have a very good framework in which to discuss these things.

Oh, I feel exactly the same way myself... :( Sometimes I really struggle
when trying to communicate these things properly. But what can we do? I
can only assure you I am really trying to understand what you are
saying, but it is always too easy to misinterpret others on such a
subjective matter...

> Consider for a moment "pain". I hope you have a pretty good idea
> as to what I am referring to, even though I have no way to validate it.
> Now if I were to accept your assertion that the processes causing
> the qualia of "pain" in me could be replicated in a computer in
> such a way that "pain" would exist in the computer when it was
> engaged in the right process (running the right program) then why
> should I not be concerned that these processes might be happening
> in computers all over the world right now? Certainly it couldn't
> be because the computers weren't exhibiting the right external
> behaviors since this would invalidate the assumption that "pain"
> emerged from the right process happening inside the computer.
> "Pain" would be a strange attribution on your part if I experiened
> it only because you labelled it thusly in your brain.

Excellent, excellent question. My theory has a good answer to that, I
hope I can also communicate it properly. :)

First, I hope it is ok to turn your concern into the form:
"We can already replicate "pain" in computer programs, such as (for
example) a virtual character feeling pain and reacting appropriately,
just like a real person. But that doesn't mean the response was
conscious. What would it mean to have a conscious response, and how could
we know we saw a conscious response?"

I am unsure if that is how you would put it, but in any case there is
one VERY important difference between our conscious pain experience, and
a "computer pain experience" as we know it.

All the AI agents that have been built so far are fed some
explicit information about the world they operate in, and explicit ways
to react to things. Computers just work in explicit ways. When we design
artificial intelligence, we create explicit behaviours for explicit
"inputs" (perceptions). Even if we build a very clever AI agent that can
handle a wide variety of situations by applying little nuggets of
explicit knowledge to solve novel situations, it doesn't mean it has a
conscious experience.

What is required for such an AI agent to have a conscious experience is
that it isn't told ANYTHING about the world it lives in. The agent needs
to be armed with:
1. A capacity to perceive patterns from the world it lives in through some
kind of sensory system.
2. A capacity to "parse" the patterns into a logical framework of knowledge.
It needs to have logical reasoning power so it can slowly "learn" what
everything it is perceiving "means". It needs to make its OWN
assumptions about all the "alien perceptions".
3. A capacity to interpret the world according to the knowledge it
builds at step 2.

And if such an AI agent is powerful enough at those three things (and it
doesn't live in an empty and static world), it should eventually be able
to pick up enough clues to make the reasonable assumption that it
probably does "exist", and from that point on come to interpret
everything that way.
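
Purely to illustrate the shape of those three capacities, here is a toy
sketch of my own - nothing like a real AI design, just the structure I
mean. Note that nothing about the world, and nothing about "self", is
given up front; only the learning machinery is:

# Toy sketch of the three capacities: perceive patterns, build a web of
# associations from them, and interpret new perceptions against that web.
# No knowledge about the world (or about "self") is built in.

import random

class Agent:
    def __init__(self):
        self.worldview = {}   # learned associations: pattern -> co-occurring patterns

    def perceive(self, world):
        # 1. Capacity to perceive patterns through some "sensory system".
        return world.sample_patterns()

    def learn(self, patterns):
        # 2. Capacity to parse patterns into a framework of assumptions:
        #    here, simply "these patterns occurred together".
        for p in patterns:
            others = set(patterns) - {p}
            self.worldview.setdefault(p, set()).update(others)

    def interpret(self, pattern):
        # 3. Capacity to interpret the world according to the knowledge built in step 2.
        return self.worldview.get(pattern, set())

class World:
    # A trivially responsive world: acting always co-occurs with a consequence,
    # which is the kind of regularity that could eventually hint at a "self".
    def sample_patterns(self):
        return random.choice([
            ["light", "warmth"],
            ["my_limb_moves", "object_moves"],
            ["sound", "vibration"],
        ])

agent, world = Agent(), World()
for _ in range(100):
    agent.learn(agent.perceive(world))
print(agent.interpret("my_limb_moves"))   # the agent's own assumption, not a given fact

The agent's "worldview" at the end is nothing but its own assumptions
about which patterns go together - including, if the world is responsive
enough, the patterns caused by the agent itself, which is where a "self"
assumption could eventually come from.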

Let's also mention that *designing* such a logical process is almost
impossible (ask the AI people and they will strangle you), but methods
like those used in Avida hold promise as a tool for realizing such a
system. There was a fascinating article about Avida in Discover
magazine, February 2005. (The little Avida fellas are already tricking
their creators by playing dead when they are being probed, if there is
an evolutionary pressure for such a thing ;)

Also note that such an AI agent cannot even be told to DO any tasks
explicitly. It requires that the AI choose its own goals and actions
based on its own assumptions about the world and what it feels it should
do. I.e. it would have its own will, just the way we do. (Though we could
probably pressure the poor guy into bending to our will anyway ;)

And btw, I mean all three of those steps in exactly the way you
experience them NOW, as a conscious being. You know you perceive
patterns from the world. You know you apply your existing knowledge
about how things work to interpret what you perceive. (Kids may think
there are little people living inside TVs, until they have enough
knowledge to interpret what they see more correctly.)

I think it is also safe to assume that animals learn the same way as
babies, even if they don't have enough power to eventually figure
themselves out (i.e. they build their worldview without the kind of
consciousness of it that we have as adults). And in much the same way,
the way animals interpret the world with their limited logical
abilities sometimes seems pretty absurdly stupid to us. Like these cats:
http://www.compfused.com/directlink/833/ (I'm sorry, I just love these
clips ;) Also, it probably tells us something how differently animals
interpret their own mirror images.

> I've got to go. I wish I had more time to express my concern right
> now, but I do not. Nonetheless, unless I have communicated some
> portion of what I'm discussing, I'm not sure more words will be helpful.

Always... :) I may be slow to answer sometimes though. And I must
apologize that my post is very long, but I think this view of mine is
very powerful for thinking about the hard and easy problems of
consciousness, if you can really grasp it... Oh, and I intend to make
this post a bit longer, because I think it is crucial to understand the
evolutionary path in nature to the kind of brain processes I've explained:

Let's imagine a simple amoeba-like organism, for which it is harmful to
stay in direct sunlight. The individuals that move in a random direction
when they perceive direct sunlight are more efficient survival machines
than the ones that simply stay put. (So the moving behaviour persists.)

Like simple pre-programmed robots, the organisms flee from
moving lights. To the uninformed, the behaviour could give the false
impression that the organisms are making conscious decisions to stay in
shadow. Yet the behaviour simply arises from a specific organization of
matter, and the laws of nature.

In nature, a runaway evolutionary process of improved survival tactics
would occur in our community of "light-evaders". The individuals with
faster movement capability would be more efficient survival machines.
The individuals with better light/shadow perception capabilities would
be more efficient survival machines. The individuals that could perceive
shadows from a distance would be far better off than those running
around randomly with trial & error.

In addition to improved mobility and improved perception, it makes for a
better survival machine to have improved behaviour patterns. For
example, the individuals that tend to move towards larger patches of
shadow rather than smaller ones will also tend to have shelter for longer
periods of time. Over time, the behaviour patterns may become very
sophisticated, and the organisms will become amazingly skillful in the
game of "stay in a shadow". And even then, the only thing they are
"aware" of is light. They need not be aware of anything else in the
environment. They aren't aware of themselves. They aren't aware of each
other. They aren't even aware of the ground they are resting on
(assuming they don't have a sense of touch, i.e. no "input signal" about
the ground itself at all).
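
As an aside, this light-evader story is easy to caricature in code. The
following is only a toy sketch of mine (nothing like a serious
artificial-life system such as Avida), showing how a single hard-wired
"flee the light" parameter gets tuned by nothing but selection and
mutation:

# Toy sketch: selection tuning a hard-wired behaviour, with no awareness involved.
# Each organism has one inherited number: how strongly it moves away from light.

import random

def lifetime_damage(flee_strength):
    # Organisms that react more strongly to light spend less time in sunlight.
    time_in_sun = max(0.0, 1.0 - flee_strength)
    return time_in_sun + random.uniform(0, 0.1)   # a little luck, good or bad

population = [random.uniform(0, 1) for _ in range(50)]

for generation in range(30):
    # The least-damaged half survives and reproduces with small mutations.
    survivors = sorted(population, key=lifetime_damage)[:25]
    population = [max(0.0, min(1.0, s + random.gauss(0, 0.05)))
                  for s in survivors for _ in range(2)]

print("average flee strength:", sum(population) / len(population))

The behaviour gets "tuned" generation after generation, yet at no point
is any organism aware of anything - which is the whole point.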

Because organisms fight for survival against other organisms, evolution
becomes an arms race towards more and more sophisticated nervous
systems, to produce more and more efficient behaviours to cope
with more and more complex situations, against more and more complex
organisms. Every organism is trying to "out-trick" every other organism.
From the point of view of evolution, it is "tuning" the nervous
systems into very sophisticated machines. As the behaviours become more
sophisticated, layers of "abstraction" start to build up inside the brain
processes. Things like "fear" and "desire" emerge as abstractions within
the brain processes.

Eventually, the nervous systems evolve to the point of being capable of
performing logical reasoning. Logical reasoning emerges because it is
helpful to be able to "predict" events, to avoid obvious dangers
that lurk ahead. (A rock rolling towards you is not good.)

Throughout the life of such a sophisticated organism, its brain
recognizes patterns coming from the sensory systems and generates
logical associations between things through logical reasoning. That is,
it stores basic information about what is known to be true about this or
that, and what relationships things have with each other -> a worldview.
It needs this to interpret complex patterns more easily. Upon building a
large enough worldview, "self-awareness" emerges.

Furthermore, consider this: if there was evolutionary pressure for it,
all of us would hold multiple independent personalities inside our
brains, and it would be completely natural. For example, the brain might
keep switching between different personalities according to the
situation, and each of "you" would have an experience of "living" a
short period of time and then being jumped to a completely different
period of time (i.e. to the next time that personality is "needed"). The
different personalities might eventually knit together the evidence from
the surroundings about the existence of the other personalities, though.

Oh, and now you still need to read the 4th chapter of "The Selfish Gene" ;)

(Again, sorry for long post. Won't be this long from this point on...
And now I won't be here to reply in the next few days so take your time
guys :)

-Anssi

Sir Frederick

Aug 5, 2005, 3:23:53 PM

I do several things:
1. Keep reading (currently "On Intelligence"; I subscribe to
several 'Science' mags).
2. Challenge myself to figure out how to build a
true machine intelligence, with 'consciousness' and qualia.
3. Resist the practice of magic in any understanding.
4. Observe the many existence proofs around in myself and
others.
5. Find the riddle entertaining enough to hold my attention.
6. Get pissed at any putative God that would set this
situation up.
7. Get pissed at myself for not solving the riddle.
8. Find partial answers to the riddle disgusting.
9. Be a misanthropist, such that any paradigm shift is
welcome.
10. Patronize all cultures, canonic and historic.

11. etc.

andy-k

Aug 5, 2005, 3:37:31 PM
> "What are subjective
> experiences? Are they magic?
> What is magic?
> Are they delusions? What are delusions?"

Delusions are subjective experiences.


Sir Frederick

Aug 5, 2005, 3:56:24 PM

What is it that makes 'delusions' get taken to be 'reality',
while many other subjective experiences are not?



andy-k

Aug 5, 2005, 4:04:11 PM
>>> "What are subjective
>>> experiences? Are they magic?
>>> What is magic?
>>> Are they delusions? What are delusions?"
>>
>>Delusions are subjective experiences.
>>
> What is it that 'delusions' are taken to be 'reality',
> while many other subjective experiences are not?

Delusions are subjective experiences that conflict with the opinions of the
majority?


Sir Frederick

Aug 5, 2005, 4:21:14 PM

IMO there is no 'opinion' or 'opinions'. Delusions are individual and private
(such as qualia). The experience may be reported,
but no deeper than that.



Stu

Aug 5, 2005, 11:15:11 PM
On 2005-08-04 06:07:40 -0700, forbi...@msn.com said:

> Suppose someone were to assert a computer running a special program
> could have "consciousness" as I mean it and the proof was to have
> my consciousness copied to the computer and after some period have
> it copied back into my brain where I could tell by introspection
> that I had consciousness the whole time. Well, I wouldn't accept
> this as proof of consciousness during the period the processes were
> running outside my brain since I wouldn't have access to this
> period directly. I would only have access to consciousness
> occurring during the access of "memories" caused by structural
> changes to my brain made during the "copying" process. My
> "consciousness" occurrs in the eternal "now". I don't even have
> access to prior instances in order to compare them angainst anything.
>
> I don't have a very good framework in which to discuss these things.

You may want to google the philosopher John Searle. He is a professor at
Berkeley who does set up a good framework for discussing these issues.
--
~Stu

Stu

Aug 5, 2005, 11:45:18 PM

Imagine this future scenario:

A tumor is taking over your hippocampus, so the doctors replace this
section with silicon chips that take over the duties of the hippocampus.

Then the thalamus starts to rot. Doctors replace that area with a chip.

This continues until all that is left of the central nervous system is
a synthetic brain.

There are three possible outcomes:

1. The individual continues to have consciousness, and intentionality
(in the Husserlian sense) is carried out. I want to lift my arm - the
arm lifts.

2. The individual continues to have consciousness but can no longer
control their body. The body functions are carried out by the machine
- there are cases of this where brain surgeons are able to stimulate
muscle movement by sparking neural pathways.

3. The individual loses consciousness and the computer brain has
taken over completely.

I vote for 3. You can look at my other posts to understand why.


(snip) long allegory


>
>
>> I don't have a very good framework in which to discuss these things.
>
> Oh I feel exactly the same way myself... :( Sometimes I really struggle
> when trying to communicate these things properly. But what can we do? I
> can only assure you I am really trying to understand what you are
> saying, but it is always too easy to misinterpret others on such a
> subjective matter...

Then you should do some reading. Start here
http://www.ecs.soton.ac.uk/~harnad/Papers/Py104/searle.comp.html


>
>> Consider for a moment "pain". I hope you have a pretty good idea
>> as to what I am referring to, even though I have no way to validate it.
>> Now if I were to accept your assertion that the processes causing
>> the qualia of "pain" in me could be replicated in a computer in
>> such a way that "pain" would exist in the computer when it was
>> engaged in the right process (running the right program) then why
>> should I not be concerned that these processes might be happening
>> in computers all over the world right now? Certainly it couldn't
>> be because the computers weren't exhibiting the right external
>> behaviors since this would invalidate the assumption that "pain"
>> emerged from the right process happening inside the computer.
>> "Pain" would be a strange attribution on your part if I experiened
>> it only because you labelled it thusly in your brain.
>
> Excellent, excellent question. My theory has a good answer to that, I
> hope I can also communicate it properly. :)
>
> First, I hope it is ok to turn your concern into a form of:
> "We can already replicate "pain" in computer programs, such as (for
> example) a virtual character feeling pain and reacting appropriately,
> just like a real person. But that doesn't mean the response was
> conscious. What would it mean to have a conscious response and how
> could we know we saw a conscious response?"

That is because you are forgetting that computer programs can only
follow instruction sets. They cannot divine meaning.

Consciousness is capable of semantic interpretation. (This is the
Chinese Room argument; google it.)

> I am unsure if that is how you would put it, but in any case there is
> one VERY important difference between our conscious pain experience,
> and a "computer pain experience" as we know it.
>

That is because all the computer can do is process data. You hit the
computer's mouse hard and it can be formally programmed to respond by
saying "ouch".

But it does not have the experience of pain and then interpret that
experience in such a way that there is a visceral response as well as a
deeper cognitive response like "perhaps I should not inflict pain on
other people because I do not like how this feels to me".

> [snip AI will never amount to anything more than a machine]


>> I've got to go. I wish I had more time to express my concern right
>> now, but I do not. Nonetheless, unless I have communicated some
>> portion of what I'm discussing, I'm not sure more words will be helpful.
>>

[snip comparing AI to simple organisms, yawn]

>
> Throughout the life of such a sophisticated organism, its brain
> recognizes patterns coming from the sensory systems, and generates
> logical associations between things through logical reasoning. That is,
> stores basic information of what is known to be true about this or
> that, and what relationships things have with each other -> a
> worldview. It needs it to interpret complex patterns more easily.
> Upon building a large enough worldview, "self-awareness" emerges.
>

That is a big leap: from what the Buddhists call the "monkey brain"
to self-awareness. It suggests that there is a complexity to the vast
experience of human consciousness that isn't going to be answered by
silicon chips and microscopes.


> Furthermore, consider this; if there was evolutionary pressure for it,
> all of us would hold multiple independent personalities inside our
> brains, and it would be completely natural. For example, the brain
> might keep switching between different personalities according to the
> situation, and each of "you" would have an experience of "living" a
> short period of time, and then being jumped to a completely different
> period of time (i.e. to the next time this personality is "needed").
> The different personalities might eventually knit together the evidence
> from the surroundings about the existence of other personalities though.
>

But we do alter states of consciousness for different events. We have
alert states, pathological states, stoned states, religious states,
emotional states, and so forth.

Consciousness comes in many colors.

We manufacture this imaginary structure called self as a defense mechanism.


> Oh, and now you still need to read the 4th chapter of "The Selfish Gene" ;)
>
> (Again, sorry for long post. Won't be this long from this point on...
> And now I won't be here to reply in the next few days so take your time
> guys :)
>
> -Anssi
>

You may be better served by putting away the books that compare the
brain to a computer and start looking at art, poetry, music in order to
better understand your self as a conscious being.

Of the two alternatives, what philosophy is going to make you most
worthy to call yourself a thinking human being?
--
~Stu

Stu

Aug 5, 2005, 11:54:16 PM
On 2005-08-04 07:36:54 -0700, "SpiKe" <no-...@home.com.invalid> said:

>> INFANT AMNESIA
>>
>> To drive this point home, let's have a little thought experiment about
>> infant amnesia. The question we are trying to answer is "why can't I
>> remember anything from the time I was an infant?"
>>
>> I think it is safe to assume that when you were an infant, you hadn't
>> realized yet that you exist. The moment you took your first gasp of air,
>> you didn't go "Oh, so here I am now". You did NOT have that declarative
>> truth "I must exist" at the disposal of your reasoning processes yet. At
>> birth, the world was just a blur of strange perceptions without any
>> logical relationships between things.

You may want to read the groundbreaking research on child development
by Piaget in the 1930s.

Infants don't develop a sense that there is a difference between
themselves and the outside world until 6-8 months (the mirror stage).
After that, much of their development is centered on creating an
imaginary layer over reality: the separation of self and other,
of self and thing, hot and cold, light and dark, and so on.
By age 2 an infant begins to add a layer that further complicates
matters: the layer of the symbolic - language.

What possible referent would we have to remember anything from this period?
--
~Stu

Anssi Hyytiainen

Aug 9, 2005, 3:35:37 AM
Thanks for the reply!

Stu wrote:
> Imagine this future scenario:
>
> A tumor is taking over your hippocampus, so the doctors replace this
> section with silicon chips that take over the duties of the hippocampus.
>
> Then the thalamus starts to rot. Doctors replace that area with a chip.
>
> This continues until all that is left of the central nervous system is a
> synthetic brain.
>
> There are three possible outcomes:
>
> 1. The individual continues to have consciousness. And intentionality
> (in the Husserlian sense) is carried out. I want to lift my arm - the
> arm lifts
>
> 2. The individual continues to have consciousness but can no longer
> control their body. The body functions are carried out by the machine -
> there are cases of this when brain surgeons are able to stimulate muscle
> movement by sparking neural pathways.
>
> 3. The individual loses consciousness and the computer brain has taken
> over completely.
>
> I vote for 3. You can look at my other posts to understand why.

Assuming the synthetic parts are functionally identical to the organic
ones, I'm afraid I'm gonna have to go with 1.

I assume your view is somewhat similar to that of John Searle. I can
assure you that his points are well understood at this end. However,
read on...

>> Oh I feel exactly the same way myself... :( Sometimes I really
>> struggle when trying to communicate these things properly. But what
>> can we do? I can only assure you I am really trying to understand what
> you are saying, but it is always too easy to misinterpret others
>> on such a subjective matter...
>
> Then you should do some reading. Start here
> http://www.ecs.soton.ac.uk/~harnad/Papers/Py104/searle.comp.html

Given that most people don't really seem to understand the difference
between explicit instructions and semantic interpretation, his arguments
are very well placed and not to be disregarded. However, the point of my
post can be seen as a direct extension of his view, which turns the
whole thing topsy-turvy. I already explained how semantic interpretation
(exactly as we know it) arises from an explicit set of instructions.

Now that I know more about how you view the issue, I think I can
translate my point better into your language. What I meant by the
difficulty of communicating was just the problem of semantics. People
are limited to interpreting incoming communication according to their
unique worldview, and everything one says is misunderstood to an extent
because of the different supporting ideas and concepts in each worldview.

One way to put it is: "Human communication cannot be understood, only
interpreted. But computer communication can ONLY be understood; there
is no room for interpretation." This is analogous to Searle's main point
about semantics vs. syntax.

Now, like I said before, the fallacy in imagining a synthetic AI is to
think that the creator of such an AI would initially feed it some explicit
knowledge about the world, and/or instructions about what to do with
such information. And then you'd "flip the switch" and the machine would
act like a regular adult human being.

Searle's view is similar to this. The Chinese Room experiment describes
explicit instructions being fed into the room, while this is exactly
what NOT to do (I think it shows his view of computer programming is too
limited).

Like I said, it is an *absolute requirement* for a conscious being
that it figures the world out itself. And not just some part of it, but
EVERYTHING. It is not to be told "this is a chair and this is a table", or
even "look for these shapes for sitting down". It is just granted the
capacity to learn, the way I described learning, and then it's let go,
without any initial knowledge about anything.

When there is NO explicit information to base your worldview on
whatsoever, the system is limited to making *assumptions* about the world,
just like all of us. A few years down the road, consciousness would arise
*as an assumption* in the worldview, just as it arises in infants.

The learning mechanisms are what is explicitly instructed, but semantic
interpretation arises from the fact that our worldview is not "the
truth"; it is incomplete, and unique to each of us, shaped by the
experiences and information we have come across. And it is the *only
thing* on which we can base our further assumptions and views of the
world. Self-awareness itself is not "the truth"; it is merely an
educated guess drawn from everything you have perceived and reasoned out
around you. It's an "opinion". You are, because you think you most
likely are.

There's somehow this view that a self-conscious system is in some way the
top level, the absolute perfection or purity of intelligence. My view of
synthetic consciousness is not a "highest order of intelligence" in any
sense. Instead it's just a mundane pile of bullshit opinions, just like
all of us :)

> That is because all the computer can do is process data. You hit the
> computer's mouse hard and it can be formally programmed to respond by
> saying "ouch".
>
> But it does not have the experience of pain and then interpret that
> experience in such a way that there is a visceral response as well as a
> deeper cognitive response like perhaps I should not inflict pain on
> other people because I do not like how this feels to me.

Really, semantic or "conscious interpretation" can arise in a machine
based on an explicit set of instructions. I am not suggesting a machine
which is told "this is pain" and "then say ouch". The machine is
not told anything about pain whatsoever. If this still sounds absurd to
you, I'm gonna have to understand your view of this better to respond.

I think here is a better way to point out the actual problem with
explicit instructions:

Searle's text points out, quite to the point, the common misconception
that people easily get when they are told that computers could run
conscious processes, much like organic brains.

My view aligns with Searle's:
"Is the brain a digital computer?" - "No!"

"Everything is a computer" is just a saying, and a dangerous one
apparently. All it means is that everything is a process and can be
described through a physical simulation.

It is NOT suggested that digital computers are ANYTHING like human brains
structurally. It isn't suggested that the brain has 1's and 0's running
inside. What is suggested is this: just like a computer can simulate a
race car on a track - even though a computer is not a car - it can
simulate a brain, even though a computer is not a brain (and is NOTHING
like a brain, much like it is NOTHING like a race car).

A digital computer is an "explicit" calculator. The MOST it can do is
hold explicit, mathematical representations of physical phenomena. It
can simulate things. Or to be precise, it can run *approximate*
simulations of things.

Instead of asking whether the brain is a digital computer, we should ask
whether the functions of the brain can be described with mathematics
(and consequently be subject to precise artificial simulation). And John
R. Searle correctly states that "syntax is not intrinsic to physics."
It's like
this:

Everything in nature can be described with mathematics, but such
descriptions are always merely APPROXIMATIONS (subject to human
knowledge and interpretation) of the *actual* physical features of the
system!

Much like an ultra-realistic race car simulation, or weather
simulation, is just an approximation of the real thing, no matter how
far you stretch it.

I think that is a statement we can all agree with?

Now the actual question boils down to whether an approximated simulation
of the brain could be enough, or would the system need to hold a
hypothetical "absolute truth" of the actual physical phenomena happening
inside the brain (in which case we couldn't achieve conscious machine).

We should not forget that even though a mathematical approximation
cannot describe the physical phenomena precisely, it can be *refined
until it is arbitrarily close to the actual thing*.
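
Just to make that "refined arbitrarily close" point concrete, here's a
toy sketch (Python; the falling-ball setup and numbers are made up purely
for illustration, nothing more) of how an approximate simulation closes
in on the exact answer as you shrink the step size:

# Toy example: a ball falling under gravity for 2 seconds. Here the exact
# physics is known (y = 0.5*g*t^2), so we can watch the simulation error
# shrink as the time step is refined.

g = 9.81        # gravitational acceleration, m/s^2
T = 2.0         # total simulated time, s
exact = 0.5 * g * T**2

def simulate(dt):
    # Crude step-by-step simulation of the fall with time step dt.
    y, v, t = 0.0, 0.0, 0.0
    while t < T:
        v += g * dt
        y += v * dt
        t += dt
    return y

for dt in (0.5, 0.1, 0.01, 0.001):
    approx = simulate(dt)
    print(dt, approx, abs(approx - exact))

# The error keeps shrinking as dt shrinks: the simulation is never the
# "actual thing", but it can be refined as far as you have the patience
# (and the computing power) for.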

I think saying "yes, we need the actual thing" springs from missing
knowledge about the structure of a conscious machine. From my view, it
is completely absurd to think that absolute perfection would be needed.
In an explicit system, NONE of the interpretations of perceptions, nor
the consequent decisions, need be explicit, even though the system is
running on an explicit platform, such as a computer system (or on
organic matter, which is also based on explicit laws of nature).

Actually, one way to describe my theory is that it is an explanation of
how explicit instructions manifest themselves as semantic
interpretation, and part of that semantic interpretation is an
interpretation that says "I exist". Does that make any sense to you?

If not, what if I told you I know (at a high level) how to build an
explicit instruction set that doesn't know jack about the world, and yet
will learn spontaneously, to the point of being able to communicate
freely with people in natural language. It would start to speculate
about things. And it would probably sooner or later come to ask you what
happens to it when it dies. And I'm not talking about sex either :)

>> Throughout the life of such a sophisticated organism, its brain
>> recognizes patterns coming from the sensory systems, and generates
>> logical associations between things through logical reasoning. That
>> is, stores basic information of what is known to be true about this or
>> that, and what relationships things have with each other -> a
>> worldview. It needs it to interpret complex patterns more easily.
>> Upon building a large enough worldview, "self-awareness" emerges.
>>
> That is a big leap. From what the Buddhists call the "monkey brain" to
> self awareness. It suggests that there is a complexity to the vast
> experience of human consciousness that isn't going to be answered by
> silicon chips and microscopes.

Yeah yeah, and nothing heavier than air could possibly fly. Except
birds, but they are the absolute perfection of organic matter that
cannot be achieved with synthetic materials. :)

That's not intended as pure sarcasm btw; airplanes vs. birds are
actually very much analogous to how I see the relationship between
artificial consciousness and organic consciousness. I don't expect to
see artificial consciousness arise from an exact replication of the
brain, just like airplanes are not replications of birds. I expect to
see a process with a logical structure, capable of real "human"
learning, in the exact sense of the word.

> You may be better served by putting away the books that compare the
> brain to a computer and start looking at art, poetry, music in order
> to better understand your self as a conscious being.

I don't know how it's possible to consume more art than I do, but I'll
be sure to do it if I figure it out ;)

> Of the two alternatives, what philosophy is going to make you most
> worthy to call yourself a thinking human being?

That's a good point, and I think that is part of the reason why there is
such resistance to seeing the brain as a set of explicit instructions in
any sense. It takes away free will. And technically it does, but
practically, it doesn't. One way to put it is "you are forced to make
the seemingly best decisions at all times". Crap :)

Hmmm, I see I made another avalanche of text... But I hope it was not
unwarranted, and that it clarifies my worldview to you a little better,
so that you can now formulate a reply that rings truer for my worldview -
armed with the knowledge that I do understand the problem of syntax vs.
semantics VERY well. So well that Searle's view seems to be almost
exactly the view I used to hold a few years ago.

Regards!
-Anssi

Anssi Hyytiainen

Aug 9, 2005, 3:44:47 AM
Sir Frederick wrote:
> On Fri, 05 Aug 2005 18:08:29 +0300, Anssi Hyytiainen <ans...@nic.fi.ANTISPAM> wrote:
>>But then I am also afraid these kinds of discussions easily slide into
>>semantics. Everything I said above doesn't mean anything unless you
>>remember what meaning I placed on "worldview", and what I mean with
>>something being "you", and even then it is probably way too vague...
>>Heh, but what can we do? :)
>>
>>-Anssi
>
> I do several things :
> 1. Keep reading (reading "On Intelligence", subscribe to
> several 'Science' mags)
> 2. Challenge myself to figure out how to build a
> true machine intelligence, with 'consciousness' and qualia.
> 3. Resist the practice of magic in any understanding.
> 4. Observe the many existence proofs around in myself and
> others.
> 5. Find the riddle entertaining enough to hold my attention.
> 6. Get pissed at any putative God that would set this
> situation up.
> 7. Get pissed at myself at not solving the riddle.
> 8. Finding partial answers to the riddle disgusting.
> 9. Being a misanthropist such that any paradigm shift is
> welcome.
> 10. Patronize all cultures, canonic and historic.

Wow...! Are you me?

Hey, for 5., how about that teleport thing? We know our cells
regenerate. We know we are not tied to our material body in that sense.
Suppose your material organization could be scanned exactly as it is,
you could be broken into pieces, and an exact new copy could be built
elsewhere accordingly. Do you think "you" would get transferred too?

If yes, what if the source copy is NOT broken into pieces, but a new
copy is built anyway? Then what?

-Anssi

forbi...@msn.com

Aug 9, 2005, 9:17:56 AM

I believe John Searle's points are not so well understood.

I will let stu give it a try. My world view is so vastly
different from yours that until we can find some common ground
there's not much to discuss.

I vote for none of the above. I don't believe human brains
will ever be replaced by silicon computers because there will be a
degradation of performance making such replacements unacceptable.
There may be some augmentation of performance in certain areas
but this will not be core to our being as humans. Qualia is closely
tied to the way performance is implemented in the human brain.
Lower animals have it, or so I think, but I'd be hard pressed to
identify exactly where in the brain qualia emerges or is
instantiated or (?).

I was just reading the exchange between Curt Welch and 1Z in
comp.ai.philosophy under the subject line "Qualia Question".
1Z is much more competent at expressing a position that appears
to be close to mine than I am at expressing mine. Nonetheless
I'm not sure those not already holding a similar position have
sufficient understanding to dismiss it as seems to be the case.

It seems so clear to me that qualia is not performance that I
cannot for the life of me understand how anyone who has qualia
could confuse the two. When I attempt to use the associated
behavioral differences as an indicator of who has qualia
and who does not, others who I believe have them pull me back.

We know John Searle's framework for discussing this issue has
failed. I know of no successful replacement.

If qualia is important to my performance then no machine without
qualia will be able to duplicate it. If my performance can be
duplicated on a computer without a good understanding as to what
qualia are and how they exist then it's likely qualia isn't
important to my performance. If qualia isn't important to my
performance then I am all alone because there could be no behavioral
evidence of its existence in others.

Anssi Hyytiainen

Aug 10, 2005, 7:09:07 AM
Thanks for the reply!

forbi...@msn.com wrote:


> Anssi Hyytiainen wrote:
>>I assume your view is somewhat similar to that of John Searle. I can
>>assure you that his points are well understood at this end. However,
>>read on...
>
> I believe John Searle's points are not so well understood.
>
> I will let stu give it a try. My world view is so vastly
> different from yours that until we can find some common ground
> there's not much to discuss.

Yeah, all differences of opinion come down to different views of the
world. So we may be stuck, unless you or Stu can crack what it is
exactly in my worldview that causes me to interpret Searle's core
argument as absurd (or vice versa).

Maybe it's helpful if I attempt to describe where I think Searle goes
wrong...

To summarize Searle's argument: "Because computers work in explicit
ways, through explicit instructions, they could not possibly have a
conscious experience the way we do, because we KNOW we are thinking
about something and we ARE making decisions CONSCIOUSLY. Explicit
instructions in computers are followed *precisely*, without any
possibility of steering the thinking processes one way or another in a
conscious sense, and therefore computer processes cannot spring
consciousness."

Is that a fair summary?

Here's where I think the absurdity lies:
Human brains themselves abide by the laws of nature. The laws of nature
are explicit. I.e. isn't it so that the electrons in our brains also
"do" exactly what the laws of nature "instruct" them to do, in an
explicit manner, and nothing else? I.e. we are powerless to steer our
brain processes, our thoughts.

And yet conscious experience arises from the explicit instructions of
nature! How could that be? (Hold on, I'll tell you.)

I'm assuming Searle thinks there's a paradox there, because clearly we
are conscious of our thoughts and we do have a sense of our own will. He
probably thinks there's some larger unknown, not presently understood,
that should resolve the paradox.

Well, explicit instructions can spring "semantic" systems, and it is not
very complicated either (just unintuitive for the inexperienced).

I'll describe the rise of semantics in nature to you, the way I see it:

When you were born, you didn't know ANYTHING about the world. You didn't
know what a wall is, what gravity is, what solidity is, what liquid is,
or any of these concepts that are now present in your worldview. You
simply did not have a worldview. You had no information about the world
whatsoever.

What you did have were sensory systems, and *explicitly* defined
processes for learning. The method of learning is, on a low level, a
hardwired process, and can be replicated on a computer through explicit
instructions. Your brain gathers patterns coming in from the sensory
systems, in a methodical way, and attempts to puzzle together some rhyme
and reason for the world around you. You learn you can't go through
solid objects. You learn the voice of your mother. Etc.

Building the first revisions of the worldview is analogous to trying to
understand a foreign language WITHOUT any explicit instructions. By
logic alone. Basically you are constrained to making wild guesses and
assumptions, and testing your assumptions to see if the foreign
literature starts to make any sense. Or, in the case of learning the
world, if the world starts to make sense. (It is also the same process
by which infants learn language, without any way of telling them that
such a thing as language even exists.)

Now, the initial worldview we come to build is going to be unique for
all of us because we come across different experiences (different
information). Even though *the end result* of the learning process *is
unique* for all, the *method of building* the worldview was *explicit*.

Furthermore, as we come across new information in the world, we
interpret it according to what we already have in our worldview. The
interpretation process is explicitly defined, but the DATA that we have
in the worldview is unique to each of us, and it directly affects the
way we "view things" and the way we further build the worldview. I.e.
semantics arise.

(One of my original points was that self-awareness itself is
"semantics", i.e. a matter of interpreting your worldview. It simply
is not a problem for our strong "capacity for learning" to also
"learn" (= come to the reasonable assumption) that we exist. Notice how
self-awareness is also an assumption; in a sense you cannot be certain
of it.)

Notice how it was essential to the above process that no information
about the world was explicitly "known" by the brain at birth (which is
the erroneous part of the Chinese Room experiment). And notice how, if
that is so, it forces us to make only educated assumptions about how
things work. Everything in the world is connected one way or another,
and one false assumption potentially reverberates across the whole
worldview.

This is why it can be said we don't know anything for certain. We don't
know for certain if we exist or if it's just some sort of an illusion.
We can only choose to believe in the scenarios we think are most
likely, according to what we know of the world.

Nowhere is this as evident as in newsgroups. Especially in all sorts of
religious debates. Notice how this very process is evident in our
discussion; in my inability to "see" what you "see", and vice versa.
There is no explicit "bottom line knowledge" we can refer to, to
immediately prove our points.

And this whole learning system was built out of *explicit instructions*.
And yet there are no explicit instructions shaping how you interpret
information, and consequently how you behave, at the present time.
*Learning is explicit, experiences are implicit.* Our explicit
instructions spring a process which "takes its shape" according to the
incoming sensory information, unique to each of us.
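
To make the "learning is explicit, experiences are implicit" point a bit
more concrete, here's a toy sketch (Python; the "experiences" and the
association rule are invented purely for illustration): two learners run
the SAME explicit learning rule, but because their experiences differ,
they end up with different "worldviews", and so they would interpret any
new experience differently.

from collections import defaultdict
from itertools import combinations

def learn(worldview, experience):
    # Explicit, hardwired learning rule: strengthen an association between
    # every pair of things that occur together in one experience.
    for a, b in combinations(sorted(experience), 2):
        worldview[(a, b)] += 1

alice_life = [{"fire", "pain"}, {"fire", "light"}, {"mother", "voice"}]
bob_life   = [{"water", "cold"}, {"water", "fish"}, {"mother", "voice"}]

alice, bob = defaultdict(int), defaultdict(int)
for e in alice_life:
    learn(alice, e)
for e in bob_life:
    learn(bob, e)

print(dict(alice))   # fire<->pain, fire<->light, mother<->voice
print(dict(bob))     # water<->cold, water<->fish, mother<->voice

# Same explicit instructions, different incoming data -> different
# "worldviews". Nothing about fire or water was ever explicitly told to
# either learner; only the learning rule itself was explicit.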

Now, I think it would be helpful if you could point out what you view as
absurd in the above.

> I was just reading the exchange between Curt Welch and 1Z in
> comp.ai.philosophy under the subject line "Qualia Question".
> 1Z is much more competent at expressing a position that appears
> to be close to mine than I am at expressing mine. None the less
> I'm not sure those not already holding a similar position have
> sufficient understanding to dismiss it as seems to be the case.

I took a brief look at that and yeah, it really shows how differently
Curt and 1Z view things... I hope we can crack this better :)

Talking about qualia though, I can't see that as an issue at all.

I mean, the qualia of redness is just a matter of interpreting the
world with the aid of your worldview. Consider: if everything in the
world were the same shade of red, you probably wouldn't be "aware" of
all the redness around you, because there would be no other colors to
"point out" the concept of colors to you. You wouldn't have such a
concept as colors, and consequently no concept of "red". You'd just have
a concept of shapes.

All that "color" around you wouldn't seem like anything that matters and
were worth any kind of name, because it'd be omnipresent and therefore
would be "unnoticeable" in a sense.

What matters is that there are different colors, or "differences in what
we call colors", and that is what enables us to learn "colors" as a
concept, and consequently grants us the possibility to "experience"
colors. The concepts of "color" and "self" give us the possibility to
interpret an event as "I see a red color".

Furthermore, it doesn't matter whether everybody experiences redness
exactly the same way, as long as they can see the differences between
colors in the same way. And when they don't, such as in color blindness,
it can be tested by an outsider.

Does that sound absurd?

> If qualia is important to my performance then no machine without
> qualia will be able to duplicate it. If my performance can be
> duplicated on a computer without a good understanding as to what
> qualia are and how they exist then it's likely qualia isn't
> important to my performance. If qualia isn't important to my
> performance then I am all alone because there could be no behavioral
> evidence of its existence in others.

In my view, such an "explicit learning machine" as I described above
would have an experience of qualia just like you do. A short way to put
it: it seems clear to me that the experience of qualia springs from the
way you interpret the world and events according to your "learned"
worldview. In this case you have learned that it is "you" who is
experiencing everything that comes in from the sensory systems, and
therefore "I experience red" is conceptually the way you interpret the
event of "your eyes observing a red color". All through explicit
instructions (of learning), just like a machine.

To clarify this "world interpretation" stuff: as you know, the image on
the retina of your eye is upside down, but "righted" in the brain. Now,
it could be a simple brain process that turns the image upside down
before it reaches your conscious experience, or qualia of vision.

OR, it could be that your brain simply assumes at some point that things
can't really be the way they seem through the visual sensors; the brain
"learns" that things are actually upside down from how they visually
seem, and therefore it comes to interpret the visual information the way
it assumes is more correct, or the way that falls in line better with
perceptions through the other sensory systems. Your qualia of vision is
such that you "experience" your visual stimuli the way it should be
interpreted, according to your unique worldview.

The latter may sound pretty stupid at first, but it could be that the
worldview naturally corrects many anomalies that the brain "learns" to
be wrong, making it more effective than dozens of independent correction
systems.

The latter view also seems to be supported by the experiment of George
Stratton, who wore special goggles that inverted his vision. Of course
at first everything looked inverted, but after several days his qualia
were back to "normal"; he experienced everything as upright. After
removing the goggles, his vision was upside down again for a while.

http://www.madsci.org/posts/archives/mar97/858984531.Ns.r.html

http://wolfstone.halloweenhost.com/TechBase/litadv_AdvancedLightingConcept.html

He also reported there were some anomalies in his visual experience with
the goggles, even when he already experienced things as upright. I
suspect this is because the inverted sensory information reverberates
into many parts of the worldview that assume otherwise and thus make
incorrect corrections. Or, to be more precise, the other "learned
corrections" of visual information didn't work the way they used to.
Such wrong assumptions are the basis of many optical illusions, amply
found around the net:

http://www.michaelbach.de/ot/

I assume that if one wore such goggles from infancy, he would find
similar anomalies only after removing the goggles a few years down the
road.
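
To show that this kind of "learned correction" doesn't require anything
magical, here's a toy sketch (Python; the reaching task and the crude
adaptation rule are invented purely for illustration, and are of course
nothing like what the brain actually does): a fixed, explicit rule ends
up flipping its interpretation of an inverted sensory channel simply
because its old assumption keeps failing.

import random

def adapt(days, goggles_on):
    flip = False          # current learned correction: invert the input or not?
    misses_in_a_row = 0
    for _ in range(days * 100):                    # reaching attempts
        target = random.choice([-1, 1])            # where the object really is
        seen = -target if goggles_on else target   # what the (maybe inverted) eye reports
        percept = -seen if flip else seen          # apply the learned correction
        if percept != target:                      # the reach misses
            misses_in_a_row += 1
        else:
            misses_in_a_row = 0
        if misses_in_a_row > 20:                   # the old assumption keeps failing...
            flip = not flip                        # ...so revise the worldview
            misses_in_a_row = 0
    return flip

print(adapt(days=5, goggles_on=True))    # True: it learned to flip the channel
print(adapt(days=5, goggles_on=False))   # False: no correction was needed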

Hmmm, I think this discussion is getting interesting... :)

-Anssi

Message has been deleted
Message has been deleted

forbi...@msn.com

Aug 10, 2005, 8:19:24 PM
Awk. I wanted to add comp.ai.philosophy back in. I forgot to do so,
so I added it and posted a second message with comp.ai.philosophy,
but that message didn't include my new text. Unfortunately, when
I deleted the message that had my text and went only to alt.philosophy,
I lost my new text. OK, I'll try again. Sorry.

I'll skip Searle's argument and use a variant of my own.

Occam's Razor tells us not to add unnecessary entities.

Those of us who have awareness and know we have it know we use it
to control our behavior. This doesn't mean this is the only way
to control behavior, only that that's the way those who know they
have it know they do it. It could be the case that awareness
is a property of matter or certain matter. I'd like to think so
since I'm not all that fond of disembodied awareness.

If awareness is necessary for human performance and humans are
machines then no machine without awareness will be able to duplicate
human performance. I don't know if awareness is necessary for human
performance, I just know I have it and I use it for my performance.

All designed computer performance can be explained without an
appeal to awareness. Adding awareness to the mix adds nothing to
the explanation of computer performance. If computers can be
designed to duplicate human performance then awareness isn't needed
to explain it.

The duplication of human performance on a computer would show that
theorists would have to look elsewhere to explain awareness, since
human behavior would be explainable without an appeal to awareness.

Not all important behaviors can be duplicated on a computer.
For instance, while the behaviors of a set of hydrogen atoms
can be simulated on a computer the simulation would not float
a real balloon in the same way as a set of hydrogen atoms.

If the physics supporting awareness are tied to the particulars
of the physical system, like the properties of hydrogen atoms
that allow hydrogen molecules to float a balloon, and awareness
is indeed necessary for human performance then before we can
design machines that duplicate human performance we will have
to have an understanding of the physics supporting awareness
and design it into our machines. These machines will not be
"computers" in the sense we currently use the term.

Publius

Aug 10, 2005, 8:23:14 PM
forbi...@msn.com wrote in news:1123710767.176009.12330
@g44g2000cwa.googlegroups.com:

> Computers might have awareness, but production of the right behaviors
> isn't sufficient evidence unless one has an irrefutable argument that
> awareness is required to produce the behaviors; but if one does, then
> the belief that silicon computers can replicate all human behaviors
> is just that, a belief, and holds no more weight than the belief that a
> computer simulation of enough hydrogen atoms can float a balloon.

The problem with that argument is that it applies to other people as well.
While you may have direct knowledge of your own consciousness, you have no
direct knowledge of anyone else's. Thus, even though their behaviors may
resemble yours (as may the computer's), you have no better grounds for
attributing consciousness to them than to the computer. They may be
zombies.

You're gonna have to give up the direct knowledge premise to avoid this
problem.

forbi...@msn.com

Aug 10, 2005, 8:35:33 PM

I'm not all that concerned about others' awareness. It pleases me to
assume they have it. I just don't want my brain replaced by a machine
not known to have awareness.

I've stated many times that I do not assume the qualia experienced
by others is identical to my own when we are both exposed to the
same stimuli. It seems unlikely to me they would have the exact
same qualia since their brain isn't exactly like mine.

Publius

Aug 10, 2005, 8:53:41 PM
forbi...@msn.com wrote in
news:1123720533.5...@z14g2000cwz.googlegroups.com:

> I'm not all that concerned about other's awareness. It pleases me to
> assume they have it.

Then why not assume a computer, whose behavior is indistinguishable from
that of a person, also has it?

Wolf Kirchmeir

Aug 10, 2005, 9:47:56 PM
forbi...@msn.com wrote:
[...]

>
> I'll skip Searle's argument and use a variant of my own.
>
> Occam's Razor tells us not to add unnecessary entities.
>
> Those of us who have awareness and know we have it know we use it
> to control our behavior. [...]


Actually, the most we can say is that we _think_ we use awareness to
control our behaviour. Several attempts to tease out the relative
timings of brain events and overt behaviour suggest that awareness is a
side-effect, not a cause, of whatever brain mechanism(s) determine(s)
what a human will "decide" to do. Until the results of these experiments
are refuted or otherwise accounted for, we cannot be certain that
consciousness/awareness is a control mechanism. That "conscious
planning" plays a role in deciding future action doesn't refute this
point, since there is no apparent way in which we consciously decide
what we will make plans about. We just -- make plans. And NB that we do
so in response to some external discriminants, such as someone telling
us that we will have a birthday party for Joe this Saturday and would
you like to help? Our response depends on whatever other discriminants
occur around the same time as this message; and of course later
discriminants may alter our behaviour, so that we stop planning. Etc.

BTW, IMO Searle just doesn't understand how computers actually work.
Hence his Chinese Room analogy/argument is utterly beside the point.
It's so far off base, it isn't even wrong.

Wolf Kirchmeir

Aug 10, 2005, 9:50:41 PM
forbi...@msn.com wrote:
[...]

> Not all important behaviors can be duplicated on a computer.
> For instance, while the behaviors of a set of hydrogen atoms
> can be simulated on a computer the simulation would not float
> a real balloon in the same way as a set of hydrogen atoms.

But it would float a simulated balloon just fine....

> If the physics supporting awareness are tied to the particulars
> of the physical system, like the properties of hydrogen atoms
> that allow hydrogen molecules to float a balloon, and awareness
> is indeed necessary for human performance then before we can
> design machines that duplicate human performance we will have
> to have an understanding of the physics supporting awareness
> and design it into our machines. These machines will not be
> "computers" in the sense we currently use the term.

I concur, since you've constructed a valid argument. But say "would not
be" to emphasise that you're hypothesising.

JGCASEY

Aug 10, 2005, 11:01:57 PM

forbi...@msn.com wrote:

[...]

> Occam's Razor tells us not to add unnecessary entities.
>
>
> Those of us who have awareness and know we have it
> know we use it to control our behavior.


I don't think we *use* awareness to control behaviour.
We know we have a steering wheel etc and we know we
use those devices to control the behavior of our car
but I don't see awareness as something you *use* to
control your own behavior.


> This doesn't mean this is the only way to control
> behavior, only that that's the way those who know
> they have it know they do it.


You are aware of your decisions and choices but that
awareness is not the cause of the decision.


> It could be the case that awareness is a property
> of matter or certain matter. I'd like to think so
> since I'm not all that fond of disembodied awareness.


Indeed I agree with that :)

> If awareness is necessary for human performance and
> humans are machines then no machine without awareness
> will be able to duplicate human performance. I don't
> know if awareness is necessary for human performance,
> I just know I have it and I use it for my performance.


I suspect it may turn out that any machine capable
of human performance would also have awareness.


> All designed computer performance can be explained
> without an appeal to awareness. Adding awareness to
> the mix adds nothing to the explanation of computer
> performance. If computers can be designed to
> duplicate human performance then awareness isn't
> needed to explain it.
>
>
> The duplication of human performance on a computer
> would show theorists would have to look elsewhere to
> explain awareness since human behavior would be
> explainable without an appeal to awareness.


A bridge to cross if we ever get to it?

> Not all important behaviors can be duplicated on
> a computer. For instance, while the behaviors of a
> set of hydrogen atoms can be simulated on a computer
> the simulation would not float a real balloon in the
> same way as a set of hydrogen atoms.


AI is not a simulation of human intelligence or
of its components, neurons. It is said to aim
for the same function, "intelligence". An analogy
often given is the function of flight. A bird has
the function of flight and so does the aeroplane.

We are not trying to simulate a brain so much as we
are trying to get a machine to perform the same
functions. These are real functions affecting the
real world. Unlike your simulated hydrogen atom,
which simply works in a simulated world, these are
real physical machines connected to the real world.

To what extent awareness can exist within a virtual
reality world by its characters is another question.
We are certainly aware of our thoughts and feelings
while dreaming.


> If the physics supporting awareness are tied to the
> particulars of the physical system, like the properties
> of hydrogen atoms that allow hydrogen molecules to
> float a balloon, and awareness is indeed necessary
> for human performance then before we can design
> machines that duplicate human performance we will
> have to have an understanding of the physics supporting
> awareness and design it into our machines. These
> machines will not be "computers" in the sense we
> currently use the term.


It may turn out to be the functions not the physics
that support awareness.

To the extent that awareness is part of reality I
see it as defined by the NOW of the time line. In
physics one 'now' is as good as any other 'now'.
There is no definition in physics of the "Awareness
Now".

JC

forbi...@msn.com

Aug 10, 2005, 11:47:19 PM
I doubt one will be built. At least not
a computer as currently envisioned.

forbi...@msn.com

Aug 10, 2005, 11:54:46 PM
JGCASEY wrote:

> forbi...@msn.com wrote:
> It may turn out to be the functions not the physics
> that support awareness.

That seems too disembodied for me.

> To the extent that awareness is part of reality I
> see it as defined by the NOW of the time line. In
> physics one 'now' is as good as any other 'now'.
> There is no definition in physics of the "Awareness
> Now".

I think this indicates a problem with our model of
reality.

Anssi Hyytiainen

Aug 11, 2005, 6:56:13 AM
forbi...@msn.com wrote:
> Awk. I wanted to add comp.ai.philosophy back in. I forgot to do so,
> so I added it and posted a second message with comp.ai.philosophy,
> but that message didn't include my new text. Unfortunately, when
> I deleted the message that had my text and went only to alt.philosophy,
> I lost my new text. OK, I'll try again. Sorry.

Heh, ok, I'm skipping the post which went only to alt.philosophy :)

I would hope the comp.ai.philosophy folks will quickly browse through
the thread at alt.philosophy to see what this is about. (=One way to put
it: I'm making a wild claim to know how a logical structure defined with
explicit instructions can spring semantics and self-awareness exactly
as we know them.)

> I'll skip Searle's argument and use a variant of my own.
>
> Occam's Razor tells us not to add unnecessary entities.

Agreed, I'm a huge fan of that principle :)

> Those of us who have awareness and know we have it know we use it
> to control our behavior. This doesn't mean this is the only way
> to control behavior, only that that's the way those who know they
> have it know they do it. It could be the case that awareness
> is a property of matter or certain matter. I'd like to think so
> since I'm not all that fond of disembodied awareness.

That's a fair feeling, but you should be careful not to use it as any
kind of "bottom line". Otherwise, when evidence begins to point the
other way, you may find yourself looking for alternative explanations
for that evidence, and when you look hard enough, you can always find
something (it just takes a little bending of your worldview).

I mean in the same way that, for example, religious people, like
so-called "creationists", find evidence for their beliefs. This is, in
one sense, what Occam's Razor also warns us about.

"Machines can become conscious" is not any kind of bottom line for me.
It's something I came to realize by how things work, to my understanding.

> If awareness is necessary for human performance and humans are
> machines then no machine without awareness will be able to duplicate
> human performance. I don't know if awareness is necessary for human
> performance, I just know I have it and I use it for my performance.

I agree, and I do strongly suspect that a machine without awareness
would necessarily be such a poor emulation of human behaviour that it
would show sooner or later. What I mean by emulation here is a machine
that has been explicitly instructed about some properties of the world
(instead of learning (=assuming) everything from scratch), and is not
capable of human-like learning.

> All designed computer performance can be explained without an
> appeal to awareness.

Yes, definitely. We haven't been able to build a learning system that
could learn about awareness fully on its own.

All designed computer systems so far have been explicitly instructed
about the necessary things they need to know, and they "know" about them
exactly, without any room for semantics and interpretation. This is
because such programming is easier, fits the "purpose", and doesn't
require months or years of hard learning on the part of the artificial
system before it could do anything at all.

Let me still mention, if it isn't obvious, that no one has actually been
able to *design* the kind of *free* learning system (capable of learning
from scratch) that our brain is, even conceptually. So explicit
instructions about the properties of the world are not only the "easier"
route, but the "only" route we are currently capable of following. It is
not a technical impossibility, however, to build a free learning system,
although it probably requires, for practical reasons, a simulation of
evolution (and programs have been created very successfully through
evolutionary processes).

> Adding awareness to the mix adds nothing to
> the explanation of computer performance.

Hmmm, here I think our thinking of "awareness" as a concept differs. I
cannot see how awareness could be "added" to a specific behaviour system
without it affecting the behaviour.

My view is that an "experience of awareness" springs naturally from a
system that is capable of "free learning", and is powerful enough at
that "learning" (logical reasoning & building of worldview accordingly)

> If computers can be
> designed to duplicate human performance then awareness isn't needed
> to explain it.
>
> The duplication of human performance on a computer would show
> theorists would have to look elsewhere to explain awareness since
> human behavior would be explainable without an appeal to awareness.

Yeah. So let's agree that awareness - as we know it - IS needed to
duplicate our behaviour. Otherwise the system is just an emulation (i.e.
"Imitation").

> Not all important behaviors can be duplicated on a computer.
> For instance, while the behaviors of a set of hydrogen atoms
> can be simulated on a computer the simulation would not float
> a real balloon in the same way as a set of hydrogen atoms.

How do you mean this? I mean, that, to me, is a totally false argument.
If you had an accurate enough simulation of virtual hydrogen atoms and a
virtual rubber balloon, it would allow you to inflate the rubber balloon
just like in real life. And if you simulate a heavier gas around the
balloon, such as the air around us, it would float like a real balloon.
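
In fact, once the simulation keeps track of densities and volumes, the
floating itself is a one-line force balance. Here's a toy sketch (Python;
the densities are rough sea-level textbook values, and the balloon's size
and skin mass are made up for illustration):

# Toy buoyancy check for a simulated hydrogen balloon in simulated air.
RHO_AIR = 1.2     # density of air, kg/m^3 (approximate)
RHO_H2  = 0.09    # density of hydrogen, kg/m^3 (approximate)
G       = 9.81    # m/s^2

def net_force(volume_m3, skin_mass_kg):
    # Upward buoyant force minus the weight of the gas and the rubber skin.
    buoyancy = RHO_AIR * volume_m3 * G
    weight   = (RHO_H2 * volume_m3 + skin_mass_kg) * G
    return buoyancy - weight

print(net_force(volume_m3=0.01, skin_mass_kg=0.005))  # > 0: the simulated balloon rises
print(net_force(volume_m3=0.01, skin_mass_kg=0.050))  # < 0: skin too heavy, it sinks

# The simulated balloon floats in the simulated air for exactly the same
# reason the real one floats in real air: the surrounding gas is denser.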

I think this is related to another thing you said in your first attempt
at this post. I quote:
"hydrogen atoms have properties (behaviors) simulations of hydrogen
atoms do not."

I don't understand what you mean by that exactly. I mean, simulation is
ABOUT modeling the behaviours of things. What do you mean by a simulation
of hydrogen atoms not having behaviours? Even if a simulation cannot
capture the behaviour with 100% accuracy (due to things like the
uncertainty principle), the accuracy can be *arbitrarily close* to 100%.

Granted, if you simulate the behaviour of hydrogen and the rubber
compound at an atomic level, simulating a whole hydrogen balloon would
need some heavy calculations and would probably not run very fast on
current computers. Which is why we always make approximations
appropriate to the needs of the simulation. Weather forecast simulations
don't simulate the whole earth at an atomic level. But as more and more
computer processing capacity becomes available, we keep pushing the
forecast simulations to a smaller and smaller granularity, for added
precision.

Simulations of physics are being treated the same way, and to see the
level where we are currently, take a look at:
http://www.cs.berkeley.edu/b-cam/PapersPage.html
There are videos of the simulations at "Details/Examples". Be sure to
check out at least all the Gas, Fluid, Sound and Fracture simulations,
they are really cool ;)

So, of course we can simulate things at an atomic level, or even at a
sub-atomic level. In theory, we could see next week's weather by
simulating the elementary particles of the solar system. (Not that it
would make much sense, and our current knowledge of elementary particles
may not be accurate enough either.)

And I think I should mention here that I really don't think we'd need
to simulate things at an atomic level in the case of consciousness. All
we need are logical counterparts to the high-level functions of human
brain tissue, much like virtual wind tunnel simulations don't need to
simulate objects at an atomic level.

> If the physics supporting awareness are tied to the particulars
> of the physical system, like the properties of hydrogen atoms
> that allow hydrogen molecules to float a balloon, and awareness
> is indeed necessary for human performance then before we can
> design machines that duplicate human performance we will have
> to have an understanding of the physics supporting awareness
> and design it into our machines.

Exactly. But huge emphasis on the word *design*. Here's a very important
point: we, the people, are natural "designers". We naturally see
authorship and intentionality wherever there is any sort of "specific
organization". Such as the specific organization of matter found in
organic species, which led to the assumption that there has been an
intelligent design behind all organisms. It required much more knowledge
and "effort" to see how complex organisms can be explained without any
conscious authorship at all, through an evolutionary process. (This too
is a matter of interpreting the world according to your worldview. If
one doesn't understand emergent processes, one cannot see "evolution".)

Likewise, when we look at an ant colony, we see apparent intentionality
in the collective behaviour, making it seem like there is some sort of
queen-ant authority giving out orders, when in fact the collective
behaviour springs from an emergent process of the local behaviours of
individual ants. (Likewise pretty well understood these days.)

When we look at computer programs, we naturally think they need to be
designed, or *authored*, for a specific purpose. The current era of
computer programming is philosophically a "creationist era", but we are
approaching a profoundly Darwinian era, in the sense that complex
problems can be solved better by "growing" programs instead of designing
them. The more complex the problem, the harder it is to come to a
"solution" by conscious design. This is what I meant by the "free
learning" machine having to be "grown" through evolutionary simulation,
instead of being designed.

A simple example of this is a number-sorting program "grown" by Danny
Hillis. Designing a number-sorting program is a kind of benchmark test
of the "genius" of the programmer. You try to design a program that can
sort random numbers in the minimum number of steps. The record stood at
60 steps when Hillis decided to give it a try. Instead of authoring a
number-sorting program himself, however, he *authored a program that
created a number-sorting program* by evolving one.

His little experiment spat out a program that sorted the numbers in 62
steps, just two steps behind the record (and that with such a simple
problem). And even at such simplicity, Hillis states in his book The
Pattern on the Stone: "One of the interesting things about the sorting
programs that evolved in my experiment is that I do not understand how
they work. I have carefully examined their instruction sequences, but I
do not understand them: I have no simpler explanation of how the
programs work than the instruction sequences themselves. It may be that
the programs are not understandable" (Read: the programs are such
logical spaghetti that it's almost impossible to understand the
structure behind the process).
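
To give a flavour of what "growing" a program means in practice, here's a
toy sketch (Python) - NOT Hillis's actual setup (he used far larger
populations and, famously, co-evolving "parasite" test cases), just the
bare idea: a candidate program is a fixed-length list of compare-and-swap
steps, it is scored by how many random test lists it sorts correctly, and
the best candidates are mutated and recombined to produce the next
generation.

import random

N = 6             # length of the lists to sort
STEPS = 18        # compare-and-swap steps per candidate program
POP = 60          # population size
GENERATIONS = 200

def random_program():
    # A "program" is just a list of (i, j) compare-and-swap pairs.
    return [tuple(sorted(random.sample(range(N), 2))) for _ in range(STEPS)]

def run(program, data):
    data = list(data)
    for i, j in program:
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data

def fitness(program, tests):
    # Score: how many of the test lists come out fully sorted.
    return sum(run(program, t) == sorted(t) for t in tests)

def mutate(program):
    prog = list(program)
    prog[random.randrange(STEPS)] = tuple(sorted(random.sample(range(N), 2)))
    return prog

def crossover(a, b):
    cut = random.randrange(STEPS)
    return a[:cut] + b[cut:]

tests = [[random.randint(0, 99) for _ in range(N)] for _ in range(40)]
population = [random_program() for _ in range(POP)]

for generation in range(GENERATIONS):
    population.sort(key=lambda p: fitness(p, tests), reverse=True)
    if fitness(population[0], tests) == len(tests):
        break
    survivors = population[:POP // 3]
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(POP - len(survivors))
    ]

print("best fitness:", fitness(population[0], tests), "out of", len(tests))
print("evolved program:", population[0])

# Nobody designs the winning sequence of swaps; it is bred. And as with
# Hillis's result, the evolved instruction sequence usually has no neat,
# human-readable structure - it just works (on the tests it was bred for).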

Anyway, the world is full of potential applications for this method of
"growing" programs, and one of them is the creation of an open-ended
learning machine.

> These machines will not be
> "computers" in the sense we currently use the term.

Yeah, precisely so. I think computers will always represent "explicit
problem-solving machines". Conscious intelligence cannot solve the same
kinds of problems that computers do; it can only interpret semantics.
And therefore conscious intelligence could not "replace" computers in
any sense of the word.

And here I think I should still clarify - for any new readers - that I'm
not talking about "a computer" springing consciousness per se. I'm
talking about a simulation of a conscious virtual being, and this
simulation just happens to be practical to realize with a computer. The
CPU of a computer is no more conscious than brain tissue is conscious.
But the logical "high-level" processes would spring consciousness.

Regards
-Anssi

Anssi Hyytiainen

Aug 11, 2005, 7:10:51 AM
Wolf Kirchmeir wrote:
> forbi...@msn.com wrote:
> [...]
>
>>
>> I'll skip Searle's argument and use a variant of my own.
>>
>> Occam's Razor tells us not to add unnecessary entities.
>>
>> Those of us who have awareness and know we have it know we use it
>> to control our behavior. [...]
>
>
>
> Actually, the most we can say is that we _think_ we use awareness to
> control our behaviour. Several attempts to tease out the relative
> timings of brain events and overt behaviour suggest that awareness is a
> side-effect, not a cause, of whatever brain mechanism(s) determine(s)
> what a human will "decide" to do.

Yeah, agreed. That is exactly the conclusion I came to myself when
thinking about the logical structure I think springs consciousness.

It's like this: we interpret the world as "I experience this and
that" because we have "learned" that we exist (which is just an
assumption). But when we consciously struggle to make a decision about
some problem, like which car to buy, we "feel" we are consciously
looking for a solution. We are, in fact, looking up associated
knowledge from our worldview that should play a part in the decision of
buying a car. We are technically "forced" to come up with the solution
that seems like the best one for the problem, all things considered.

If you want to disprove the above by choosing the "next best option",
you don't really understand what was meant by it, and you have again
made the "most effective" choice, all things considered (incl. the
desire to disprove the argument).

And this is not just a funny philosophical wordplay, it is EXACTLY what
happens.

> BTW, IMO Searle just doesn't understand how computers actually work.
> Hence his Chinese Room analogy/argument is utterly beside the point.
> It's so far off base, it isn't even wrong.

Agreed. A better version of the Chinese Room experiment would be a guy
who is in the room and is fed ONLY Chinese literature (NOT any explicit
instructions). He would then need to guess the meanings of the symbols
and see if the incoming Chinese literature appears to make any sense. A
few years down the road, even after he has "cracked" the "secrets" of
the language, he is still forever forced to make mere educated guesses
about the meanings of symbols, and is likely to keep refining his
assumptions and still be slightly off-base to the end of his life. The
way we are with nature.

Still, I wouldn't use that thought experiment, because it includes the
assumption that the guy in the room knows how things are in the outside
world and can use that as "explicit" information to crack the secrets
of the language - which should not be allowed in an open-ended learning
system.
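
For a flavour of what "guessing the meanings by logic alone" could look
like at the very first step, here's a toy sketch (Python; the
"literature" stream and the crude similarity measure are invented purely
for illustration): given nothing but a raw stream of opaque symbols, you
can still notice which symbols keep turning up in the same contexts,
which is the first foothold for assigning them tentative roles - long
before anything deserving the name "meaning".

from collections import defaultdict

# A toy "literature" stream of opaque symbols - no instructions, no translations.
stream = "E A B C E A B D E A B C E A B D E A B C".split()

# For each symbol, count which symbols tend to appear immediately before it.
left_context = defaultdict(lambda: defaultdict(int))
for prev, sym in zip(stream, stream[1:]):
    left_context[sym][prev] += 1

def similarity(x, y):
    # Symbols that tend to follow the same symbols get a higher score - a
    # crude first "assumption" about which symbols might play similar roles.
    cx, cy = left_context[x], left_context[y]
    return sum(min(cx[s], cy[s]) for s in set(cx) & set(cy))

# "C" and "D" both tend to follow "B", while "A" tends to follow "E", so
# the guesser lumps C and D together, without ever being told what any of
# the symbols mean.
print(similarity("C", "D"), similarity("C", "A"))   # prints: 2 0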

-Anssi

forbi...@msn.com

Aug 11, 2005, 7:12:11 AM

Wolf Kirchmeir wrote:
> forbi...@msn.com wrote:
> [...]
> >
> > I'll skip Searle's argument and use a variant of my own.
> >
> > Occam's Razor tells us not to add unnecessary entities.
> >
> > Those of us who have awareness and know we have it know we use it
> > to control our behavior. [...]
>
> Actually, the most we can say is that we _think_ we use awareness to
> control our behaviour.

Where "think" is an aware process.

> Several attempts to tease out the relative
> timings of brain events and overt behaviour suggest that awareness is a
> side-effect, not a cause, of whatever brain mechanism(s) determine(s)
> what a human will "decide" to do.

This suggests we know the relationship between brain activity and
awareness. What would be the point of awareness if it is a mere
side effect of processes downstream from the determinant of behavior?
Why would this portion of the brain evolve into existence if it is
unnecessary for survival?

> Until the results of these experiments
> are refuted or otherwise accounted for, we cannot be certain that
> consciousness/awareness is a control mechanism. That "conscious
> planning" plays a role in deciding future action doesn't refute this
> point, since there is no apparent way in which we consciously decide
> what we will make plans about. We just -- make plans. And NB that we do
> so in response to some external discriminants, such as someone telling
> us that we will have a birthday party for Joe this Saturday and would
> you like to help? Our response depends on whatever other discriminants
> occur around the same time as this meassage; and of course later
> discriminants may alter our behaviour, so that we stop planning. Etc.

I hope there are correlates between brain activity and mental activity.
It seems if our model of physics cannot account for awareness as a
control mechanism and awareness isn't just an unnecessary side effect
then the problem lies with our model.

What you've asserted seems to indicate a situation where computers
could produce human performance without awareness since you consider
it a side effect of something downstream from that which actually
produces the behavior.

> BTW, IMO Searle just doesn't understand how computers actually work.
> Hence his Chinese Room analogy/argument is utterly beside the point.
> It's so far off base, it isn't even wrong.

Searle's Chinese Room analogy doesn't carry much water for those who
believe computers can be made to duplicate human performance. I
don't believe computers (as currently envisioned) can duplicate
human performance, even though I am a strong proponent of physically
based awareness.

An interesting situation could arise where a simulation of human
behaviors in a simulated world on a computer could require "spooky
action at a distance" or some non-"time linear" functions. While
this would violate the time dependent cause/effect relationship
most of us hold dear, if it solved the problem of producing simulated
human performance in a simulated world on a computer it would help
us understand the nature of nature. I'm so sure of awareness's
role in human performance that I'm even willing to give up this
very basic portion of our model of physics.

The production of a working simulation (as opposed to an envisioning
of one) will decide the issue.

forbi...@msn.com

Aug 11, 2005, 7:16:21 AM
Anssi Hyytiainen wrote:

> Wolf Kirchmeir wrote:
> > BTW, IMO Searle just doesn't understand how computers actually work.
> > Hence his Chinese Room analogy/argument is utterly beside the point.
> > It's so far off base, it isn't even wrong.
>
> Agreed.

I hope all who think Searle provides a useful framework in which
to discuss the issue of awareness can see this proof by example
that he does not.

Anssi Hyytiainen

Aug 11, 2005, 7:24:11 AM
JGCASEY wrote:

> forbi...@msn.com wrote:
>>Those of us who have awareness
>>know we use it to control our behavior.
>
> I don't think we *use* awareness to control behaviour.
> We know we have a steering wheel etc and we know we
> use those devices to control the behavior of our car
> but I don't see awareness as something you *use* to
> control your own behavior.

I think the steering wheel thing is what he meant exactly. But we don't
"know" we have a steering wheel, we "feel" we have a steering wheel.
There's a difference.

If you make it a bottom line that you DO have a steering wheel, you are
going to run into a paradox with the fact that the laws of nature are
explicit and control your brain processes at the lowest level. You
cannot suppose your thoughts bend the laws of nature, allowing you to
steer your brain impulses. Or if you do suppose so, you need to explain
where such "thoughts" exist, then, and if they are not in your head, on
what grounds they get to bend the laws of nature, etc... Basically you
run headlong into a wall if you go down that road. It's an authorship
fallacy at its heart, just like creationism.

-Anssi

forbi...@msn.com

Aug 11, 2005, 7:51:59 AM