How would a computer know if it were conscious?

Hal Finney

Jun 2, 2007, 4:13:30 PM
to everyth...@googlegroups.com
Various projects exist today aiming at building a true Artificial
Intelligence. Sometimes these researchers use the term AGI, Artificial
General Intelligence, to distinguish their projects from mainstream AI
which tends to focus on specific tasks. A conference on such projects
will be held next year, agi-08.org.

Suppose one of these projects achieves one of the milestone goals of
such efforts; their AI becomes able to educate itself by reading books
and reference material, rather than having to have facts put in by
the developers. Perhaps it requires some help with this, and various
questions and ambiguities need to be answered by humans, but still this is
a huge advancement as the AI can now in principle learn almost any field.

Keep in mind that this AI is far from passing the Turing test; it is able
to absorb and digest material and then answer questions or perhaps even
engage in a dialog about it. But its complexity is, we will suppose,
substantially less than the human brain.

Now at some point the AI reads about the philosophy of mind, and the
question is put to it: are you conscious?

How might an AI program go about answering a question like this?
What kind of reasoning would be applicable? In principle, how would
you expect a well-designed AI to decide if it is conscious? And then,
how or why is the reasoning different if a human rather than an AI is
answering such questions?

Clearly the AI has to start with the definition. It needs to know what
consciousness is, what the word means, in order to decide if it applies.
Unfortunately such definitions usually amount to either a list of
synonyms for consciousness, or use the common human biological heritage
as a reference. From the Wikipedia: "Consciousness is a quality of the
mind generally regarded to comprise qualities such as subjectivity,
self-awareness, sentience, sapience, and the ability to perceive the
relationship between oneself and one's environment." Here we have four
synonyms and one relational description which would arguably apply to
any computer system that has environmental sensors, unless "perceive"
is also merely another synonym for conscious perception.
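
To make that circularity concrete, here is the crudest possible self-test I can
imagine (a toy sketch of my own; the criteria strings and "tests" are invented
for illustration and not drawn from any real AI project). The program walks the
Wikipedia-style list of criteria and finds that it has no independent,
operational test for any of them.

import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch only. Each "test" bottoms out in another undefined synonym,
// so the program can neither affirm nor deny that the criterion applies.
public class NaiveSelfTest {
    public static void main(String[] args) {
        Map<String, String> criteria = new LinkedHashMap<>();
        criteria.put("subjectivity", "no operational test; synonym for conscious experience");
        criteria.put("self-awareness", "no operational test; 'awareness' is the thing at issue");
        criteria.put("sentience", "no operational test; synonym");
        criteria.put("sapience", "no operational test; synonym");
        criteria.put("perceives self/environment relation", "true of any system with sensors?");

        for (Map.Entry<String, String> c : criteria.entrySet()) {
            System.out.println("Am I " + c.getKey() + "? -> " + c.getValue());
        }
        // Nothing above yields a yes/no answer; the question remains ill-posed
        // for the program.
    }
}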

It looks to me like AIs, even ones much more sophisticated than I am
describing here, are going to have a hard time deciding whether they
are conscious in the human sense. Since humans seem essentially unable
to describe consciousness in any reasonable operational terms, there
doesn't seem any acceptable way for an AI to decide whether the word
applies to itself.

And given this failure, it calls into question the ease with which
humans assert that they are conscious. How do we really know that
we are conscious? For example, how do we know that what we call
consciousness is what everyone else calls consciousness? I am worried
that many people believe they are conscious simply because as children,
they were told they were conscious. They were told that consciousness
is the difference between being awake and being asleep, and assume on
that basis that when they are awake they are conscious. Then all those
other synonyms are treated the same way.

Yet most humans would not admit to any doubt that they are conscious.
For such a slippery and seemingly undefinable concept, it seems odd
that people are so sure of it. Why, then, can't an AI achieve a similar
degree of certainty? Do you think a properly programmed AI would ever
say, yes, I am conscious, because I have subjectivity, self-awareness,
sentience, sapience, etc., and I know this because it is just inherent in
my artificial brain? Presumably we could program the AI to say this,
and to believe it (in whatever sense that word applies), but is it
something an AI could logically conclude?

Hal

Quentin Anciaux

Jun 2, 2007, 5:32:05 PM
to everyth...@googlegroups.com
On Saturday 02 June 2007 22:13:30 Hal Finney wrote:
> but is it
> something an AI could logically conclude?
>
> Hal

I guess if it was conscious... sure ;)

Quentin

Quentin Anciaux

Jun 2, 2007, 5:36:18 PM
to everyth...@googlegroups.com
I'd like to add a "definition" of consciousness.

Consciousness is the inner narrative composed of sounds/images/feelings which
presents itself as 'I'. What the origin/meaning of 'I' is, I don't know,
but 'I' is the consciousness.

Quentin


Brent Meeker

Jun 2, 2007, 5:57:06 PM
to everyth...@googlegroups.com
Quentin Anciaux wrote:
> I'd like to add a "definition" of consciousness.
>
> Consciousness is the inner narative composed of sounds/images/feelings which
> present itself as 'I'. What is (the origin/meaning) of 'I', I don't know,
> but 'I' is the consciousness.
>
> Quentin
>

John McCarthy notes that consciousness is not a single thing. He has
written some essays on what it would mean to create a conscious
artificial intelligence:

http://www-formal.stanford.edu/jmc/consciousness.html
http://www-formal.stanford.edu/jmc/zombie.pdf

Brent Meeker

Jason Resch

Jun 3, 2007, 2:43:30 AM
to everyth...@googlegroups.com
At the very least could it be said the AI is conscious of the question?  Would this awareness of even a single piece of information be sufficient to make it conscious?

Jason

marc....@gmail.com

Jun 3, 2007, 4:36:39 AM
to Everything List
Consciousness is a cognitive system capable of reflecting on other
cognitive systems, by enabling switching and integration between
differing representations of knowledge in different domains. It's a
higher-level summary of knowledge in which there is a degree of coarse
graining sufficient to lose precise information about the underlying
computations. Current experience is integrated with past knowledge in
order to provide higher-level summaries of the meaning of a concept.
Any cognitive system capable of reflection in this sense is
conscious. In essence, consciousness is what *mediates* between different
representations of knowledge... as mentioned above... the ability to
switch between and integrate different representational systems.
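
A crude structural sketch of what I mean (the types and names below are just
mine, invented for illustration; nothing hangs on the details): several
domain-specific representations expose only coarse-grained summaries, and a
reflective layer switches between them and integrates them with past summaries.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative skeleton only; all names are invented. The reflective layer never
// sees the underlying computations, only lossy summaries, which it integrates
// across domains and with past knowledge.
interface DomainRepresentation {
    String domain();
    String coarseSummary();   // deliberately loses fine-grained detail
}

class Reflector {
    private final List<String> past = new ArrayList<>();

    String reflect(List<DomainRepresentation> current) {
        StringBuilder summary = new StringBuilder("higher-level summary: ");
        for (DomainRepresentation r : current) {
            String s = r.domain() + " -> " + r.coarseSummary();
            past.add(s);                       // integrate with past knowledge
            summary.append(s).append("; ");
        }
        return summary.toString();
    }

    public static void main(String[] args) {
        DomainRepresentation physical = new DomainRepresentation() {
            public String domain() { return "physical"; }
            public String coarseSummary() { return "red patch, left of centre"; }
        };
        DomainRepresentation teleological = new DomainRepresentation() {
            public String domain() { return "teleological"; }
            public String coarseSummary() { return "current goal: keep reading"; }
        };
        System.out.println(new Reflector().reflect(Arrays.asList(physical, teleological)));
    }
}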

There are three general types of consciousness arising from the fact
that there are three different classes of cognitive systems which
could be potentially reflected upon. The first are systems which
perceive physical concepts. When this perception is reflected upon,
we experience sensations. The second are systems which perceive
teleological concepts... closely related to our motivational systems.
When this is reflected upon, we experience emotions (or more
accurately feelings). The third type of consciousness is very weak in
humans... it's the ability to reflect upon systems which perceive
logical/mathematical things. Reflection upon these systems is
consciously experienced as an 'ontology-scape' (in a sense, conscious
awareness of the theory of everything). But as mentioned, this last
type of consciousness is very weak in humans, since our ability to
reflect upon our own cognitive systems is quite small and not done by
the brain directly (when engaged in logical reasoning, we humans are
not generally reflecting on our thoughts directly, but via indirect
means such as verbal or visual representations of these thoughts).

The third type of consciousness mentioned above is synonymous with
'reflective intelligence'. That is, any system successfully engaged
in reflective decision theory would automatically be conscious.
Incidentally, such a system would also be 'friendly' (ethical)
automatically. The ability to reason effectively about one's own
cognitive processes would certainly enable the ability to elaborate
precise definitions of consciousness and determine that the system was
indeed conforming to the aforementioned definitions.

Much of the confusion surrounding these issues stems from the fact
there's not one definition of 'general intelligence', but THREE.
There's the ability to detect patterns (which does not require
sentience), there's the ability to engage in symbolic reasoning (which
also does not require sentience), finally there's the ability to
engage in reflective reasoning (reasoning about reasoning). And it's
this third definition of intelligence which DOES by necessity entail
consciousness. It also, by necessity, entails ethical behaviour.

Now there are those who point to powerful systems such as 'Corporations'
and 'Evolution' to try to argue that you can have intelligence without
consciousness. But these arguments are not convincing. It's true
that, for instance, 'Evolution' is an intelligent system in ONE sense,
but it's certainly NOT a *reflective intelligence*. Nor, for instance,
is a 'Corporation'. A 'corporation' has SOME of the sub-systems
of person-hood, but not all of them. The existence of non-sentient
cognitive systems which display *some* of the features of
intelligence is a LONG way from establishing that you can have
*reflective intelligence* without consciousness. As has been pointed
out, RPOPS such as for instance 'Evolution' are NOT capable of
reflective intelligence and therefore cannot be taken as a disproof of
the claim that reflective intelligence must by necessity automatically
be friendly and sentient.

The reason I elaborate this carefully is because you will, on certain
mailing lists, run into a number of artificial intelligence crack-pots
claiming that (1) You can have reflective intelligence without
consciousness and (2) AI's with reflective intelligence don't have to
be ethical and might destroy the world. Ignore these crack-pot claims
when you see them.

Stathis Papaioannou

Jun 3, 2007, 5:20:50 AM
to everyth...@googlegroups.com
On 03/06/07, marc....@gmail.com <marc....@gmail.com> wrote:

The third type of conscious mentioned above is synonymous with
'reflective intelligence'.  That is, any system successfully engaged
in reflective decision theory would automatically be conscious.
Incidentally, such a system would also be 'friendly' (ethical)
automatically.  The ability to reason effectively about ones own
cognitive processes would certainly enable the ability to elaborate
precise definitions of consciousness and determine that the system was
indeed conforming to the aforementioned definitions.

How do you derive (a) ethics and (b) human-friendly ethics from reflective intelligence?  I don't see why an AI should decide to destroy the world, save the world, or do anything at all to the world, unless it started off with axioms and goals which pushed it in a particular direction.




--
Stathis Papaioannou

marc....@gmail.com

Jun 3, 2007, 5:44:34 AM
to Everything List

On Jun 3, 9:20 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:

When reflective intelligence is applied to cognitive systems which
reason about teleological concepts (which include values, motivations
etc) the result is conscious 'feelings'. Reflective intelligence,
recall, is the ability to correctly reason about cognitive systems.
When applied to cognitive systems reasoning about teleological
concepts this means the ability to correctly determine the
motivational 'states' of self and others - as mentioned - doing this
rapidly and accurately generates 'feelings'. Since, as has been known
since Hume, feelings are what ground ethics, the generation of
feelings which represent accurate tokens about motivational states
automatically leads to ethical behaviour.

Bad behaviour in humans is due to a deficit in reflective
intelligence. It is known, for instance, that psychopaths have great
difficulty perceiving fear and sadness and negative motivational
states in general. Correct representation of motivational states is
correlated with ethical behaviour. Thus it appears that reflective
intelligence is automatically correlated with ethical behaviour. Bear
in mind, as I mentioned that: (1) There are in fact three kinds of
general intelligence, and only one of them ('reflective intelligence')
is correlated with ethics. The other two are not. A deficit in
reflective intelligence does not affect the other two types of general
intelligence (which is why for instance psychopaths could still score
highly in IQ tests). And (2) Reflective intelligence in human beings
is quite weak. This is the reason why intelligence does not appear to
be much correlated with ethics in humans. But this fact in no way
refutes the idea that a system with full and strong reflective
intelligence would automatically be ethical.

Stathis Papaioannou

Jun 3, 2007, 7:11:23 AM
to everyth...@googlegroups.com
On 03/06/07, marc....@gmail.com <marc....@gmail.com> wrote:

> How do you derive (a) ethics and (b) human-friendly ethics from reflective
> intelligence?  I don't see why an AI should decide to destroy the world,
> save the world, or do anything at all to the world, unless it started off
> with axioms and goals which pushed it in a particular direction.

When reflective intelligence is applied to cognitive systems which
reason about teleological concepts (which include values, motivations
etc) the result is conscious 'feelings'.  Reflective intelligence,
recall, is the ability to correctly reason about cognitive systems.
When applied to cognitive systems reasoning about teleological
concepts this means the ability to correctly determine the
motivational 'states' of self and others - as mentioned - doing this
rapidly and accuracy generates 'feelings'.  Since, as has been known
since Hume, feelings are what ground ethics, the generation of
feelings which represent accurate tokens about motivational
automatically leads to ethical behaviour.

Determining the motivational states of others does not necessarily involve feelings or empathy. It has been historically very easy to assume that other species or certain members of our own species either lack feelings or, if they have them, it doesn't matter. Moreover, this hasn't prevented people from determining the motivations of inferior beings in order to exploit them. So although having feelings may be necessary for ethical behaviour, it is not sufficient.

Bad behaviour in humans is due to a deficit in reflective
intelligence.  It is known for instance, that psychopaths have great
difficulty perceiving fear and sadness and negative motivational
states in general.  Correct representation of motivational states is
correlated with ethical behaviour.  

Psychopaths are often very good at understanding other people's feelings, as evidenced by their ability to manipulate them. The main problem is that they don't *care* about other people; they seem to lack the ability to be moved by other people's emotions and lack the ability to experience emotions such as guilt. But this isn't part of a general inability to feel emotion, as they often present as enraged, entitled, depressed, suicidal, etc., and these emotions are certainly enough to motivate them. Psychopaths have a slightly different set of emotions, regulated in a different way compared to the rest of us, but are otherwise cognitively intact.

Thus it appears that reflective
intelligence is automatically correlated with ethical behaviour.  Bear
in mind, as I mentioned that: (1) There are in fact three kinds of
general intelligence, and only one of them ('reflective intelligence')
is correlated with ethics.    The other two are not.  A deficit in
reflective intelligence does not affect the other two types of general
intelligence (which is why for instance psychopaths could still score
highly in IQ tests).  And (2) Reflective intelligence in human beings
is quite weak.  This is the reason why intelligence does not appear to
be much correlated with ethics in humans.  But this fact in no way
refutes the idea that a system with full and strong reflective
intelligence would automatically be ethical.

Perhaps I haven't quite understood your definition of reflective intelligence. It seems to me quite possible to "correctly reason about cognitive systems", at least well enough to predict their behaviour to a useful degree, and yet not care at all about what happens to them. Furthermore, it seems possible to me to do this without even suspecting that the cognitive system is conscious, or at least without being sure that it is conscious.


--
Stathis Papaioannou

Jason

Jun 3, 2007, 3:09:27 PM
to Everything List
What do others on this list think about Max Tegmark's definition of
consciousness:

"I believe that consciousness is, essentially, the way information
feels when being processed. Since matter can be arranged to process
information in numerous ways of vastly varying complexity, this
implies a rich variety of levels and types of consciousness."

Source: http://www.edge.org/q2007/q07_7.html

Jason


Hal Finney

Jun 3, 2007, 3:52:17 PM
to everyth...@googlegroups.com
Part of what I wanted to get at in my thought experiment is the
bafflement and confusion an AI should feel when exposed to human ideas
about consciousness. Various people here have proffered their own
ideas, and we might assume that the AI would read these suggestions,
along with many other ideas that contradict the ones offered here.
It seems hard to escape the conclusion that the only logical response
is for the AI to figuratively throw up its hands and say that it is
impossible to know if it is conscious, because even humans cannot agree
on what consciousness is.

In particular I don't think an AI could be expected to claim that it
knows that it is conscious, that consciousness is a deep and intrinsic
part of itself, that whatever else it might be mistaken about it could
not be mistaken about being conscious. I don't see any logical way it
could reach this conclusion by studying the corpus of writings on the
topic. If anyone disagrees, I'd like to hear how it could happen.

And the corollary to this is that perhaps humans also cannot legitimately
make such claims, since logically their position is not so different
from that of the AI. In that case the seemingly axiomatic question of
whether we are conscious may after all be something that we could be
mistaken about.

Hal

Quentin Anciaux

Jun 3, 2007, 5:17:36 PM
to everyth...@googlegroups.com
Why would we have a word that intuitively everybody can grasp for himself
without it being linked to a "real" phenomenon?

Not only do we have one word, but we have plenty of words which try to grasp the
idea. Denying the phenomenon of consciousness like this is playing a vocabulary
game... not denying the subject of the word.

Quentin

Jayceetout

Jun 3, 2007, 8:33:42 PM
to Everything List

Colin Hales

Jun 3, 2007, 8:48:07 PM
to everyth...@googlegroups.com
Sorry about the previous post... I did it from the Google
list....something weird happened.
---------------------------------------

Hi folks,
Re: How would a computer know if it were conscious?

Easy.

The computer would be able to go head to head with a human in a competition.
The competition?
Do science on exquisite novelty that neither party had encountered.
(More interesting: Make their life depend on getting it right. The
survivors are conscious).

Only conscious entities can do open ended science on the exquisitely novel.
You cannot teach something how to deal with the exquisitely novel because
you haven't any experience of it to teach. It means that the entity must
be configured as a machine that "learns how to learn something". This is
one meta-level removed from your usual AI situation. It's what humans do.
During neurogenesis and development, humans "learn how to learn how to
learn".

If the computer/scientist can match the human/scientist...it's as
conscious as a human. It must be.

cheers
colin hales


Russell Standish

Jun 3, 2007, 11:45:37 AM
to everyth...@googlegroups.com
I don't see that you've made your point. If you achieve this, you have
created an artificial creative process, a sort of holy grail of
AI/ALife. However, it seems far from obvious that consciousness should
be necessary. Biological evolution is widely considered to be creative
(even exponentially so), but few would argue that the biosphere is
conscious (and has been for ca 4E9 years).

Cheers

--

----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------

Saibal Mitra

Jun 3, 2007, 10:52:22 PM
to everyth...@googlegroups.com
If it feels bafflement and confusion, then surely it is conscious :)

An AI that takes information from books might experience qualia similar to those
we can experience. The AI will be programmed to do certain tasks and it must
thus have a notion of whether what it is doing is ok, not ok, or completely wrong.

If things are going wrong and it has to revert what it has just done, it may
feel some sort of pain. Just like what happens to us if we pick up something
that is very hot.

So, I think that there will be a mismatch between the qualia the AI
experiences and what "it reads about that we experience". The AI won't read
the information like we read it. I think it will directly experience it as
some qualia, just like we experience information coming in via our senses
into our brain.

The meaning we associate with the text would not be accessible to the AI,
because ultimately that is linked to the qualia we experience.

Perhaps what the AI experiences when it is processing information is similar
to an animal that is moving in some landscape. Maybe when it reads something
then that manifests itself like some object it sees. If it processes
information then that could be like picking up that object and putting it next
to a similar looking object.

But if that object represents a text about consciousness then there is no
way for the AI to know that.

Saibal

marc....@gmail.com

Jun 4, 2007, 2:21:49 AM
to Everything List

On Jun 3, 11:11 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:

> Determining the motivational states of others does not necessarily involve
> feelings or empathy. It has been historically very easy to assume that other
> species or certain members of our own species either lack feelings or, if
> they have them, it doesn't matter. Moreover, this hasn't prevented people
> from determining the motivations of inferior beings in order to exploit
> them. So although having feelings may be necessary for ethical behaviour, it
> is not sufficient.

You are ignoring the distinction I made between three different kinds
of general intelligence. I gave three different definitions, remember:

*Pattern Recognition Intelligence
*Symbolic Reasoning Intelligence
*Reflective Intelligence

A mere 'determination of the motivational states of self and others'
does not by itself constitute *reflective intelligence* according to my
definitions. Not only must the motivational states of self/others be
determined and represented (this process by itself does not require
ethics or sentience), these representations must be *reflected* upon.
Only this final step, I'm saying, leads to ethical behaviour. Once
you have a system performing *full* reflection correctly, you get
feelings. And, I maintain, there is no real difference between
feeling and motivation.



>
> Psychopaths are often very good at understanding other peoples' feelings, as
> evidenced by their ability to manipulate them. The main problem is that they
> don't *care* about other people; they seem to lack the ability to be moved
> by other peoples' emotions and lack the ability to experience emotions such
> as guilt. But this isn't part of a general inability to feel emotion, as
> they often present as enraged, entitled, depressed, suicidal, etc., and
> these emotions are certainly enough to motivate them. Psychopaths have a
> slightly different set of emotions, regulated in a different way compared to
> the rest of us, but are otherwise cognitively intact.

See what I said above about the distinction between three different
kinds of general intelligence. It's true that the psychopath can
indeed understand others in an *abstract* *intellectual* sense
(pattern recognition and symbolic reasoning intelligence), but what
the psychopath lacks is the ability to fully *reflect* upon this
understanding (reflective intelligence).

You yourself admit: 'psychopaths have a slightly different set of
emotions, regulated in a different way compared to the rest of us'.
Therefore it simply isn't true that the psychopath is 'cognitively
intact'. Again, the psychopath can obtain an abstract, intellectual
understanding of others, but lacks the ability to fully reflect upon
this information in order to directly experience it (as qualia).

It is documented that psychopaths are lacking the ability to
experience the full range of emotions - specifically they appear
unable to fully experience certain negative emotions such as fear and
sadness. (Although they can, as you point out, experience *some*
kinds of emotions). See the book 'Social Intelligence' ( by Daniel
Goleman) for references about the emotional deficits of psychopaths.

>
> Thus it appears that reflective
>
> > intelligence is automatically correlated with ethical behaviour. Bear
> > in mind, as I mentioned that: (1) There are in fact three kinds of
> > general intelligence, and only one of them ('reflective intelligence')
> > is correlated with ethics. The other two are not. A deficit in
> > reflective intelligence does not affect the other two types of general
> > intelligence (which is why for instance psychopaths could still score
> > highly in IQ tests). And (2) Reflective intelligence in human beings
> > is quite weak. This is the reason why intelligence does not appear to
> > be much correlated with ethics in humans. But this fact in no way
> > refutes the idea that a system with full and strong reflective
> > intelligence would automatically be ethical.
>
> Perhaps I haven't quite understood your definition of reflective
> intelligence. It seems to me quite possible to "correctly reason about
> cognitive systems", at least well enough to predict their behaviour to a
> useful degree, and yet not care at all about what happens to them.
> Furthermore, it seems possible to me to do this without even suspecting that
> the cognitive system is conscious, or at least without being sure that it is
> conscious.
>
> --

> Stathis Papaioannou-

See, you haven't understood my definitions. It may be my fault due to
the way I worded things. You are of course quite right that: 'it's
possible to correctly reason about cognitive systems at least well
enough to predict their behaviour to a useful degree and yet not care
at all about what happens to them'. But this is only pattern
recognition and symbolic intelligence, *not* fully reflective
intelligence. Reflective intelligence involves additional
representations enabling a system to *integrate* the aforementioned
abstract knowledge (and experience it directly as qualia). Without
this ability an AI would be unable to maintain a stable goal structure
under recursive self improvement and therefore would remain limited.

Brent Meeker

Jun 4, 2007, 2:35:26 AM
to everyth...@googlegroups.com
Jason wrote:
> What do others on this list think about Max Tegmark's definition of
> consciousness:
>
> "I believe that consciousness is, essentially, the way information
> feels when being processed. Since matter can be arranged to process
> information in numerous ways of vastly varying complexity, this
> implies a rich variety of levels and types of consciousness."
>
> Source: http://www.edge.org/q2007/q07_7.html

I think it is a very indefinite definition. It doesn't tell us anything about the levels and types and how we would make a conscious being. I think John McCarthy has a much more explicit and useful essay on his website.

Brent Meeker

Stathis Papaioannou

Jun 4, 2007, 6:26:48 AM
to everyth...@googlegroups.com
A human, or an AI, or a tree stump cannot be mistaken about what, if anything, it directly experiences. However, it could not know that this experience corresponds to what any other entity calls "consciousness". It is possible that what other people call "consciousness" is very different to what I experience, and certainly a computer would do well to question whether its experiences, such as they may be, are "consciousness" as would befit a human; but it couldn't be in doubt that it had some experiences.


--
Stathis Papaioannou

Stathis Papaioannou

Jun 4, 2007, 7:15:54 AM
to everyth...@googlegroups.com
On 04/06/07, marc....@gmail.com <marc....@gmail.com> wrote:

See you haven't understood my definitions.  It may be my fault due to
the way I worded things.  You are of course quite right that: 'it's
possible to correctly reason about cognitive systems at least well
enough to predict their behaviour to a useful degree and yet not care
at all about what happens to them'.  But this is only pattern
recognition and symbolic intelligence, *not* fully reflective
intelligence.  Reflective intelligence involves additional
representations enabling a system to *integrate* the aforementioned
abstract knowledge (and experience it directly as qualia).    Without
this ability an AI would be unable to maintain a stable goal structure
under recursive self improvement and therefore would remain limited.

Are you saying that a system which has reflective intelligence would be able to in a sense emulate the system it is studying, and thus experience a very strong form of empathy? That's an interesting idea, and it could be that very advanced AI would have this ability; after all, humans have the ability for abstract reasoning which other animals almost completely lack, so why couldn't there be a qualitative (or nearly so) rather than just a quantitative difference between us and super-intelligent beings?

However, what would be wrong with a super AI that just had large amounts of pattern recognition and symbolic reasoning intelligence, but no emotions at all? It could work as the ideal disinterested scientist, doing theoretical physics without regard for its own or anyone else's feelings. You would still have to say that it was super-intelligent, even though it is an idiot from the reflective intelligence perspective. It also would pose no threat to anyone because all it wants to do and all it is able to do is solve abstract problems, and in fact I would feel much safer around this sort of AI than one that has real power and thinks it has my best interests at heart.

Secondly, I don't see how the ability to fully empathise would help the AI improve itself or maintain a stable goal structure. Adding memory and processing power would bring about self-improvement, perhaps even recursive self-improvement if it can figure out how to do this more effectively with every cycle, and yet it doesn't seem that this would require the presence of any other sentient beings in the universe at all, let alone the ability to empathise with them.

Finally, the majority of evil in the world is not done by psychopaths, but by "normal" people who are aware that they are causing hurt, may feel guilty about causing hurt, but do it anyway because there is a competing interest that outweighs the negative emotions.


--
Stathis Papaioannou

Brent Meeker

Jun 4, 2007, 12:28:23 PM
to everyth...@googlegroups.com
marc....@gmail.com wrote:
>
>
> On Jun 3, 11:11 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:
>
>> Determining the motivational states of others does not necessarily involve
>> feelings or empathy. It has been historically very easy to assume that other
>> species or certain members of our own species either lack feelings or, if
>> they have them, it doesn't matter. Moreover, this hasn't prevented people
>> from determining the motivations of inferior beings in order to exploit
>> them. So although having feelings may be necessary for ethical behaviour, it
>> is not sufficient.
>
> You are ignoring the distinction I made between three different kinds
> of general intelligence. I gave there different definitions remember:
>
> *Pattern Recognition Intelligence
> *Symbolic Reasoning Intelligence
> *Reflective Intelligence
>
> A mere 'determination of the motivational states of self and others'
> does not by itself constitute *reflective intelligence* according my
> definitions. Not only must the motivational states of self/others by
> determined and represented (this process by itself does not require
> ethics or sentience), these representations must be *reflected* upon.
> Only this final step, I'm saying, leads to ethical behaviour. Once
> you have a system performing *full* reflection correctly, you get
> feelings. And, I maintain, there is no real difference between
> feeling and motivation.

But what feelings? You assume that the AI has the same values to reflect on as a normal human. Even normal humans have feelings of competitiveness; they value domination and security. An AI, however reflective, might conclude the world would be better without humans, leaving more resources for copies of itself. Of course you could define this away as "not bad", but then we're left to wonder what counts as "bad behaviour" and what doesn't.

Brent Meeker

Brent Meeker

Jun 4, 2007, 12:54:54 PM
to everyth...@googlegroups.com
Stathis Papaioannou wrote:

>
>
> On 04/06/07, *"Hal Finney"* <h...@finney.org <mailto:h...@finney.org>> wrote:
>
>
> Part of what I wanted to get at in my thought experiment is the
> bafflement and confusion an AI should feel when exposed to human ideas
> about consciousness. Various people here have proffered their own
> ideas, and we might assume that the AI would read these suggestions,
> along with many other ideas that contradict the ones offered here.
> It seems hard to escape the conclusion that the only logical response
> is for the AI to figuratively throw up its hands and say that it is
> impossible to know if it is conscious, because even humans cannot agree
> on what consciousness is.
>
> In particular I don't think an AI could be expected to claim that it
> knows that it is conscious, that consciousness is a deep and intrinsic
> part of itself, that whatever else it might be mistaken about it could
> not be mistaken about being conscious. I don't see any logical way it
> could reach this conclusion by studying the corpus of writings on the
> topic. If anyone disagrees, I'd like to hear how it could happen.
>
> And the corollary to this is that perhaps humans also cannot
> legitimately
> make such claims, since logically their position is not so different
> from that of the AI. In that case the seemingly axiomatic question of
> whether we are conscious may after all be something that we could be
> mistaken about.
>
>
>
> A human, or an AI, or a tree stump cannot be mistaken about what, if
> anything, it directly experiences.

That's not so clear to me. Certainly one can be uncertain about what one directly experiences, as illustrated by various optical illusions. And certainly one can be wrong about what one has just experienced. I suspect that, except for some tautological definition of "directly experienced", one can be wrong about them.

Of course that doesn't imply that one can be wrong about having some experience at all, but maybe... On the modular view of the brain, part A that evaluates could be wrong about part B experiencing something.

>However, it could not know that this
> experience corresponds to what any other entity calls "consciousness".
> It is possible that what other people call "consciousness" is very
> different to what I experience, and certainly a computer would do well
> to question whether its experiences, such as they may be, are
> "consciousness" as would befit a human; but it couldn't be in doubt that
> it had some experiences.

I agree; but the question isn't whether it could be in doubt, but whether it could be mistaken. I don't think this question can be answered without first having a good 3rd person theory of what constitutes consciousness.

Brent Meeker

Brent Meeker

Jun 4, 2007, 1:05:00 PM
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
>
>
> On 04/06/07, *marc....@gmail.com <mailto:marc....@gmail.com>*
> <marc....@gmail.com <mailto:marc....@gmail.com>> wrote:
>
> See you haven't understood my definitions. It may be my fault due to
> the way I worded things. You are of course quite right that: 'it's
> possible to correctly reason about cognitive systems at least well
> enough to predict their behaviour to a useful degree and yet not care
> at all about what happens to them'. But this is only pattern
> recognition and symbolic intelligence, *not* fully reflective
> intelligence. Reflective intelligence involves additional
> representations enabling a system to *integrate* the aforementioned
> abstract knowledge (and experience it directly as qualia). Without
> this ability an AI would be unable to maintain a stable goal structure
> under recursive self improvement and therefore would remain limited.
>
>
> Are you saying that a system which has reflective intelligence would be
> able to in a sense emulate the system it is studying, and thus
> experience a very strong form of empathy? That's an interesting idea,
> and it could be that very advanced AI would have this ability; after
> all, humans have the ability for abstract reasoning which other animals
> almost completely lack, so why couldn't there be a qualitative (or
> nearly so) rather than just a quantitative difference between us and
> super-intelligent beings?
>
> However, what would be wrong with a super AI that just had large amounts
> of pattern recognition and symbolic reasoning intelligence, but no
> emotions at all?

Taken strictly, I think this idea is incoherent. Essential to intelligence is taking some things as more important than others. That's the difference between data collecting and theorizing. It is a fallacy to suppose that emotion can be divorced from reason - emotion is part of reason. An interesting example comes from attempts at mathematical AI. Theorem proving programs have been written and turned loose on axiom systems - but what results are a lot of theorems that mathematicians judge to be worthless and trivial.
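
The failure mode is easy to reproduce even in a toy (this little sketch is mine, not taken from any actual theorem prover): blind forward chaining from a handful of axioms using only conjunction introduction churns out over a hundred formally valid "theorems" in two rounds, none of which anyone would care about. Deciding which results matter is exactly the evaluative, emotion-laden part.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy sketch only: every derived formula is valid, and almost all of them are worthless.
public class TrivialProver {
    public static void main(String[] args) {
        List<String> theorems = new ArrayList<>(Arrays.asList("p", "q", "p -> q"));
        for (int round = 0; round < 2; round++) {
            List<String> derived = new ArrayList<>();
            for (String a : theorems) {
                for (String b : theorems) {
                    derived.add("(" + a + ") & (" + b + ")");   // conjunction introduction
                }
            }
            theorems.addAll(derived);
        }
        System.out.println(theorems.size() + " theorems derived; a typical one: "
                + theorems.get(theorems.size() - 1));
    }
}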

Otherwise I entirely agree with Stathis.

>It could work as the ideal disinterested scientist,
> doing theoretical physics without regard for its own or anyone else's
> feelings. You would still have to say that it was super-intelligent,
> even though it it is an idiot from the reflective intelligence
> perspective. It also would pose no threat to anyone because all it wants
> to do and all it is able to do is solve abstract problems, and in fact I
> would feel much safer around this sort of AI than one that has real
> power and thinks it has my best interests at heart.
>
> Secondly, I don't see how the ability to fully empathise would help the
> AI improve itself or maintain a stable goal structure. Adding memory and
> processing power would bring about self-improvement, perhaps even
> recursive self-improvement if it can figure out how to do this more
> effectively with every cycle, and yet it doesn't seem that this would
> require the presence of any other sentient beings in the universe at
> all, let alone the ability to empathise with them.
>
> Finally, the majority of evil in the world is not done by psychopaths,
> but by "normal" people who are aware that they are causing hurt, may
> feel guilty about causing hurt, but do it anyway because there is a
> competing interest that outweighs the negative emotions.

Or they may feel proud of their actions because they have supported those close to them against competition from those distant from them. To suppose that empathy and reflection can eliminate all competition for limited resources strikes me as pollyannish.

Brent Meeker

Tom Caylor

Jun 4, 2007, 1:50:01 PM
to Everything List

I think that IF a computer were conscious (I don't believe it is
possible), then the way we could know it is conscious would not be by
interviewing it with questions and looking for the "right" answers.
We could know it is conscious if the computer, on its own, started
asking US (or other computers) questions about what it was
experiencing. Perhaps it would say things like, "Sometimes I get
this strange and wonderful feeling that I am "special" in some way. I
feel that what I am doing really is significant to the course of
history, that I am in some story." Or perhaps, "Sometimes I wish that
I could find out whether what I am doing is somehow significant, that
I am not just a duplicatable thing, and that what I am doing is not
'meaningless'."

Tom

marc....@gmail.com

Jun 4, 2007, 11:16:40 PM
to Everything List

On Jun 4, 11:15 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:


> On 04/06/07, marc.ged...@gmail.com <marc.ged...@gmail.com> wrote:
>
> See you haven't understood my definitions. It may be my fault due to
>
> > the way I worded things. You are of course quite right that: 'it's
> > possible to correctly reason about cognitive systems at least well
> > enough to predict their behaviour to a useful degree and yet not care
> > at all about what happens to them'. But this is only pattern
> > recognition and symbolic intelligence, *not* fully reflective
> > intelligence. Reflective intelligence involves additional
> > representations enabling a system to *integrate* the aforementioned
> > abstract knowledge (and experience it directly as qualia). Without
> > this ability an AI would be unable to maintain a stable goal structure
> > under recursive self improvement and therefore would remain limited.
>
> Are you saying that a system which has reflective intelligence would be able
> to in a sense emulate the system it is studying, and thus experience a very
> strong form of empathy?

Yes

>That's an interesting idea, and it could be that
> very advanced AI would have this ability; after all, humans have the ability
> for abstract reasoning which other animals almost completely lack, so why
> couldn't there be a qualitative (or nearly so) rather than just a
> quantitative difference between us and super-intelligent beings?

But I don't think this is qualitatively different to what humans do
already. It does seem that our ability to feel does in part involve
emulating other people's inner motivational states. See the research
on 'Mirror Neurons'. Or again, Daniel Goleman's 'Social
Intelligence' talks about this.

http://en.wikipedia.org/wiki/Mirror_neurons

It seems that we humans are already pretty good at reflection on
motivation. Certainly reflection on motivation gives rise to
feelings. Emotions are the human strength. Our 'cutting edge' so to
speak.

But remember that 'reflection on motivation' is only one kind of
reflection. There are other kinds of reflection that we humans are
not nearly so good at. I listed three general classes of reflection
above - one type of reflection we humans seem to be very poor at is
'reflection on abstract reasoning' (or reflection on logic/
mathematics). With regard to this type of reflection we are rather in an
analogous position to the emotional retard. We have symbolic/abstract
knowledge of mathematics (symbolic and pattern recognition
intelligence), but this is not directly reflected in our conscious
experience (or at least it is only in our conscious awareness very
weakly). For example, you may know (intellectually) that 2+2=4 but
you do not *consciously experience* this information. You are
suffering from 'mathematical blind sight'. Now giving a super-human a
strong ability to reflect on math/logic *would* definitely be a
qualitative difference between us and super-intellects.

But here is something really cool: By intensely forcing yourself and
training yourself to think constantly about math/logic, it may be
possible for a human to partially draw math/logic into actual
conscious awareness! I can tell you here that in fact I claim to have
done just that.... and the result is.... very interesting ;) Suffice
it to say that I believe that math/logic knowledge appears in
consciousness as a sort of 'Ontology-Scape'. Just as the ability to
reflect on motivation gives rise to emotional experience, so I believe
that the ability to reflect on math/logic gives rise to a new kind of
conscious experience... what I call 'the ontology scape'. As I said,
I am of the opinion that if you really force yourself and train
yourself, it's possible to partially draw this 'Ontology scape' into
your own conscious awareness.

>
> However, what would be wrong with a super AI that just had large amounts of
> pattern recognition and symbolic reasoning intelligence, but no emotions at
> all? It could work as the ideal disinterested scientist, doing theoretical
> physics without regard for its own or anyone else's feelings. You would
> still have to say that it was super-intelligent, even though it it is an
> idiot from the reflective intelligence perspective. It also would pose no
> threat to anyone because all it wants to do and all it is able to do is
> solve abstract problems, and in fact I would feel much safer around this
> sort of AI than one that has real power and thinks it has my best interests
> at heart.

As I said, intelligence has three parts: Pattern Recognition, Symbolic
Reasoning and Reflective. You can't cut out 1/3rd of real
intelligence and still expect your system to function
effectively! ;) A system missing reflective intelligence would have
serious cognitive deficits. (In fact, for the reasons I explain
below, I believe such a system would be unable to improve itself.)

>
> Secondly, I don't see how the ability to fully empathise would help the AI
> improve itself or maintain a stable goal structure. Adding memory and
> processing power would bring about self-improvement, perhaps even recursive
> self-improvement if it can figure out how to do this more effectively with
> every cycle, and yet it doesn't seem that this would require the presence of
> any other sentient beings in the universe at all, let alone the ability to
> empathise with them.

Self-improvement requires more than just extra hardware. It also
requires the ability to integrate new knowledge with an existing
knowledge base in order to create truly original (novel) knowledge.
But this appears to be precisely the definition of reflective
intelligence! Thus, it seems that a system missing reflective
intelligence simply cannot improve itself in an ordered way. To
improve, a current goal structure has to be 'extrapolated' into a new
novel goal structure which nonetheless does not conflict with the
spirit of the old goal structure. But nothing but a *reflective*
intelligence can possibly make an accurate assessment of whether a new
goal structure is compatible with the old version! This stems from
the fact that comparison of goal structures requires a *subjective*
value judgement and it appears that only a *sentient* system can make
this judgement (since as far as we know, ethics/morality is not
objective). This proves that only a *sentient* system (a *reflective
intelligence*) can possibly maintain a stable goal structure under
recursive self-improvement.
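
Schematically (a skeleton of my own, with invented names, just to show where
the hard step sits): the extrapolation of a goal structure is easy to write
down, but the compatibility check is precisely the step I claim cannot be
filled in without a subjective value judgement.

import java.util.List;

// Illustrative skeleton only. The method left without a mechanical recipe is the
// compatibility judgement; my claim is that only a reflective (sentient) system
// can supply it.
abstract class RecursiveSelfImprover {
    private List<String> goals;

    RecursiveSelfImprover(List<String> initialGoals) {
        this.goals = initialGoals;
    }

    // Mechanical part: propose a revised goal structure for the improved system.
    abstract List<String> extrapolate(List<String> oldGoals);

    // The crux: does the new structure preserve the *spirit* of the old one?
    abstract boolean compatibleInSpirit(List<String> oldGoals, List<String> newGoals);

    void improveOnce() {
        List<String> candidate = extrapolate(goals);
        if (compatibleInSpirit(goals, candidate)) {   // requires a value judgement
            goals = candidate;
        }
    }
}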

>
> Finally, the majority of evil in the world is not done by psychopaths, but
> by "normal" people who are aware that they are causing hurt, may feel guilty
> about causing hurt, but do it anyway because there is a competing interest
> that outweighs the negative emotions.
>
> --
> Stathis Papaioannou

Yes, true. But see what I said about there being more than one kind of
reflection. Strong empathy and feelings alone (caused by reflection
on motivation) are not enough. The human brain is not functioning as a
fully reflective intelligence, since as I pointed out, we don't have
much ability to reflect on math/logic.

Incidentally, as regards our debate yesterday on psychopaths, there
appears to be some basis for thinking that the psychopath *does*
have a general inability to feel emotions. On the wiki:

http://en.wikipedia.org/wiki/Psychopath

"Their emotions are thought to be superficial and shallow, if they
exist at all."

"It is thought that any emotions which the primary psychopath exhibits
are the fruits of watching and mimicking other people's emotions."

So the supposed emotional displays could be faked. Thus it could well
be the case that there is a lack of ability to 'reflect on
motivation' (to feel).


marc....@gmail.com

Jun 5, 2007, 12:43:34 AM
to Everything List

On Jun 5, 5:05 am, Brent Meeker <meeke...@dslextreme.com> wrote:
> Stathis Papaioannou wrote:

>
> > However, what would be wrong with a super AI that just had large amounts
> > of pattern recognition and symbolic reasoning intelligence, but no
> > emotions at all?
>
> Taken strictly, I think this idea is incoherent. Essential to intelligence is taking some things as more important than others. That's the difference between data collecting and theorizing. It is a fallacy to suppose that emotion can be divorced from reason - emotion is part of reason. An interesting example comes from attempts at mathematical AI. Theorem proving programs have been written and turned loose on axiom systems - but what results are a lot of theorems that mathematicians judge to be worthless and trivial.

Yeah. That's the difference between *reflective intelligence* and
ordinary *symbolic logic*+*pattern recognition*. I would say that
ordinary reason is a part of emotion. (or reflective intelligence
encompasses the other two types). But you're right, you can't divorce
conscious experience from reason. It's from conscious experience that
value judgements come.

>
> > Finally, the majority of evil in the world is not done by psychopaths,
> > but by "normal" people who are aware that they are causing hurt, may
> > feel guilty about causing hurt, but do it anyway because there is a
> > competing interest that outweighs the negative emotions.
>
> Or they may feel proud of their actions because they have supported those close to them against competition from those distant from them. To suppose that empathy and reflection can eliminate all competition for limited resources strikes me as pollyannish.
>

> Brent Meeker-

The human brain doesn't function as a fully reflective system. Too
much is hard-wired and not accessible to conscious experience. Our
brains simply don't function as a properly integrated system. Full
reflection would enable the ability to reach into our underlying
preferences and change them.

Brent Meeker

Jun 5, 2007, 1:16:13 AM
to everyth...@googlegroups.com

On the contrary, they are well tuned for evolutionary survival in a hunter-gatherer society. Your ancestors are more likely to have been killers than victims.

>Full
> reflection would enable the ability to reach into our underlying
> preferences and change them.

But how would you want to change them? Or put another way, you can change your preferences - you just can't want to change them.

I think you are assuming that empathy trumps all other values. I see no reason to believe this - or even to wish it.

Brent Meeker

Colin Hales

Jun 5, 2007, 1:50:09 AM
to everyth...@googlegroups.com
Hi Russell,

> I don't see that you've made your point.
> If you achieve this, you have created an artificial
> creative process, a sort of holy grail of AI/ALife.

Well? So what? Somebody has to do it. :-)

The 'holy grail' terminology implies (subtext) that the creative process
is some sort of magical unapproachable topic or is the exclusive domain of
discipline X and that is not me.... beliefs I can't really buy into. I
don't need anyone's permission to do what I do.

Creativity in humans is perfectly natural, evolved in the brutal and
inefficient experimental lab of evolution and survives because it was
necessary for a _scientist_ (not cognitive agents of any other kind - I do
not claim that) to come into existence. A scientist is a specific, highly
specialised, very highly defined behavioural subset of the biological
world with reproducible outputs that relate directly to consciousness that
can be verified. It is the ONLY example to use in respect of any
claims of consciousness in an artifact.

I suppose I have made a judgement call - a design choice - as an engineer
doing AGI - which I am perfectly entitled to do. It uses the only real
benchmark we have for the processes involved. As an empirical proposition
it is better placed than anything else I have ever heard from anyone
anywhere, ever....why?....It has measurable outcomes using the _one and
only_ definitive, verified and repeatable provider of 3rd person evidence
of the creative process and its intimate relationship to
consciousness - scientists themselves.

> However, it seems far from obvious that consciousness should
> be necessary.

It is perfectly obvious! Do a scientific experiment on yourself. Close
your eyes and then tell me you can do science as well. Qualia gone =
Science GONE. For crying out loud - am I the only only that gets
this?......Any other position that purports to be able to deliver anything
like the functionality of a scientist without involving ALL the
functionality (especially qualia) of a scientist must be based on
assumptions - assumptions I do not make.

It's not that all AI is necessarily conscious. It is not that all
conscious entities are scientists. The position is designed to be able to
make one single, very specific, cogent, conclusive, verifiable claim
_once_. Having scientifically reached that point other positions on the
role/necessity/presence of consciousness in biology and machine can follow.

> Biological evolution is widely considered to be creative
> (even exponentially so), but few would argue that the
> biosphere is conscious (and has been for ca 4E10 years).

The second law of thermodynamics is the driver. I know that!.....and who
is arguing that the biosphere is conscious? It has nothing to do with my
engineering position/design/benchmarking choice. The creative act is, in
the case of scientists - being "verifiably and serendipitously not-wrong"
in respect of propositions about the natural world = empirical method.
This, in a human, including all the relevant cognitive processes - and
_especially_ the physics of qualia - is a perfectly valid benchmark.

If a human must have consciousness to do science (the physics that exposes
a scientist/agent appropriately to the real novelty around them, external
to the scientist) and a machine can do science as well then that machine
is conscious. QED. If you know what qualia are (have a proposition for
them) and you switch them off (which I am proposing) and the ability to do
science fails.... QED...and your proposition in respect of qualia has
reached a level of empirical validity. This method has empirical teeth.
Indeed I would defy _anyone_ to undermine it without making unfounded
a-priori assumptions as to the nature and role of the physics of
qualia....that is, unscientific or quasi-religious adherence to axioms
that were defined by the observation process in the first place.

I believe this kind of discussion in this thread to be flawed because time
and time again it fails to make use of the simplest of questions. Read it
very carefully:

"What is the underlying universe in which those things we observe in brain
material (atoms, molecules, cells doing their dance), all defined _using_
observation would be/could be responsible _for
observation itself_ AND make it look like it does (atoms, molecules, cells
doing their dance)"

For _that_ universe is the one we inhabit. This is an empirically
testable, validly explored area. That universe - whatever it is - is not
that universe defined by/within observation (atoms, molecules, cells etc).
It is the universe that LOOKS LIKE atoms, molecules, cells when you use the
observation faculty provided by it because whatever it is those things are
made of, WE _ARE_ IT.

If you can't see this....let's see.... Ok.....science/maths defines the
SINE WAVE:
f(t) = sin(t)
we observe a sine wave, we characterise it as appearing within our
consciousness as shown. Now ask "what is it that is behaving
sine-wave-ly". Whatever that is, it is NOT a sine wave. Another question
to ask "What is it like to BE a sine wave?". These are all aspects of the
same thing.

Now consider one of the models (sine waves) -
computationalism/functionalism - defined through an observation. What is
the observation? .....That the universe seems to be performing computation
or information processing....so....what do we do with that observation?....
We jump to the unfounded conclusion that any form of computation in some
undefined way leads to consciousness (= all sine waves are
conscious)......

This is as flawed as any similar explanation, since it is logically
indistinguishable from, and as empirically useless as, the equivalent
belief: "I believe observation (consciousness) is invoked by the tooth
fairy on Thursdays".

The only real, verifiable evidence of consciousness we have is the
existence of scientists and their output. It may seem a hard task to set
yourself, as an AI worker... but TOUGH - nobody said it had to be easy -
and it is no reason to set it aside in favour of an empirically useless
"tooth fairy hypothesis for consciousness".

At least I have a plan.

so in relation to....

> I don't see that you've made your point.

I'd like to think that I have. My AI/Human scientist face-off stands as is
and I defy anyone to come up with something practical/better that isn't
axiomatically flawed. Everything is scientific evidence of something.
Scientists are no exception.

cheers,
colin hales


Colin Hales

unread,
Jun 5, 2007, 1:53:57 AM6/5/07
to everyth...@googlegroups.com

Torgny Tholerus

unread,
Jun 5, 2007, 2:50:23 AM6/5/07
to everyth...@googlegroups.com
Tom Caylor skrev:

>
> I think that IF a computer were conscious (I don't believe it is
> possible), then the way we could know it is conscious would not be by
> interviewing it with questions and looking for the "right" answers.
> We could know it is conscious if the computer, on its own, started
> asking US (or other computers) questions about what it was
> experiencing. Perhaps it would say things like, "Sometimes I get
> this strange and wonderful feeling that I am "special" in some way. I
> feel that what I am doing really is significant to the course of
> history, that I am in some story." Or perhaps, "Sometimes I wish that
> I could find out whether what I am doing is somehow significant, that
> I am not just a duplicatable thing, and that what I am doing is not
> 'meaningless'."
>
public class ConsciousnessClaims {
    public static void main(String[] a) {
        // A program that merely prints the claims, with nothing behind them.
        System.out.println("Sometimes I get this strange and wonderful feeling");
        System.out.println("that I am 'special' in some way.");
        System.out.println("I feel that what I am doing really is significant");
        System.out.println("to the course of history, that I am in some story.");
        System.out.println("Sometimes I wish that I could find out whether what");
        System.out.println("I am doing is somehow significant, that I am not just");
        System.out.println("a duplicatable thing, and that what I am doing");
        System.out.println("is not 'meaningless'.");
    }
}

You can make more complicated programs, where this is not so obvious, by
"genetic programming". But it will take a rather long time. Nature
had to work for over a billion years to make human beings. With
genetic programming you will succeed after only a million
years. Then you will have a program that is as conscious as you are.
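
To make the "genetic programming" idea concrete, here is a minimal sketch of
the mutate-and-select loop it relies on, in the same Java as the snippet
above. The target string, the single-character mutation and the fitness
function are illustrative assumptions only; real genetic programming evolves
program trees rather than strings, but the keep-whatever-scores-better loop
is the same in spirit.

import java.util.Random;

public class ToyEvolution {
    // Illustrative target; nothing in the loop "understands" what it says.
    static final String TARGET = "I am conscious";
    static final Random RNG = new Random();

    // Fitness = number of character positions that already match the target.
    static int fitness(String s) {
        int score = 0;
        for (int i = 0; i < TARGET.length(); i++) {
            if (s.charAt(i) == TARGET.charAt(i)) score++;
        }
        return score;
    }

    // Copy the parent, changing one randomly chosen character to a random
    // printable ASCII character.
    static String mutate(String parent) {
        char[] child = parent.toCharArray();
        child[RNG.nextInt(child.length)] = (char) (' ' + RNG.nextInt(95));
        return new String(child);
    }

    public static void main(String[] args) {
        // Start from a random string of the right length.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < TARGET.length(); i++) {
            sb.append((char) (' ' + RNG.nextInt(95)));
        }
        String current = sb.toString();

        int generations = 0;
        while (fitness(current) < TARGET.length()) {
            String candidate = mutate(current);
            // Selection: keep the mutant if it is at least as fit.
            if (fitness(candidate) >= fitness(current)) current = candidate;
            generations++;
        }
        System.out.println("Reached \"" + current + "\" after " + generations
                + " mutations.");
    }
}

Run as-is it typically converges after a few thousand mutations; the point is
only that selection on output needs no inner life, which is the worry raised
above about judging consciousness from output alone.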

--
Torgny Tholerus


marc....@gmail.com

unread,
Jun 5, 2007, 2:57:36 AM6/5/07
to Everything List

On Jun 5, 6:50 pm, Torgny Tholerus <tor...@dsv.su.se> wrote:

>
> public class ConsciousnessClaims {
>     public static void main(String[] a) {
>         // A program that merely prints the claims, with nothing behind them.
>         System.out.println("Sometimes I get this strange and wonderful feeling");
>         System.out.println("that I am 'special' in some way.");
>         System.out.println("I feel that what I am doing really is significant");
>         System.out.println("to the course of history, that I am in some story.");
>         System.out.println("Sometimes I wish that I could find out whether what");
>         System.out.println("I am doing is somehow significant, that I am not just");
>         System.out.println("a duplicatable thing, and that what I am doing");
>         System.out.println("is not 'meaningless'.");
>     }
> }
>
> You can make more complicated programs, that is not so obvious, by
> "genetic programming". But it will take rather long time. The nature
> had to work for over a billion years to make the human beings. But with
> genetic programming you will succeed already after only a million
> years. Then you will have a program that is equally conscious as you are.
>
> --
> Torgny Tholerus

An additional word of advice for budding programmers. For heaven's
sake don't program in Java! It'll take you one million years to
achieve the same functionality as only a few years of Ruby code:

http://www.wisegeek.com/contest/what-is-ruby.htm

Cheers!

Russell Standish

unread,
Jun 4, 2007, 8:38:43 AM6/4/07
to everyth...@googlegroups.com
On Tue, Jun 05, 2007 at 03:50:09PM +1000, Colin Hales wrote:
>
> Hi Russell,
>
> > I don't see that you've made your point.
> > If you achieve this, you have created an artificial
> > creative process, a sort of holy grail of AI/ALife.
>
> Well? So what? Somebody has to do it. :-)
>
> The 'holy grail' terminology implies (subtext) that the creative process
> is some sort of magical unapproachable topic or is the exclusive domain of
> discipline X and that is not me.... beliefs I can't really buy into. I
> don't need anyone's permission to do what I do.
>

I never implied that. I'm surprised you inferred it. Holy grail just
means something everyone (in that field) is chasing after, so far
unsuccessfully.

If you figure out a way to do it, good for you! Someone will do it one
day, I believe, otherwise I wouldn't be in the game either. But the
problem is damned subtle.

>
> > However, it seems far from obvious that consciousness should
> > be necessary.
>
> It is perfectly obvious! Do a scientific experiment on yourself. Close
> your eyes and then tell me you can do science as well. Qualia gone =
> Science GONE. For crying out loud - am I the only one that gets
> this?......Any other position that purports to be able to deliver anything
> like the functionality of a scientist without involving ALL the
> functionality (especially qualia) of a scientist must be based on
> assumptions - assumptions I do not make.
>

I gave a counter example, that of biological evolution. Either you
should demonstrate why you think biological evolution is uncreative,
or why it is conscious.

Mark Peaty

unread,
Jun 5, 2007, 4:06:12 AM6/5/07
to everyth...@googlegroups.com

Firstly, congratulations to Hal on asking a very good question.
It is obviously one of the *right* questions to ask and has
flushed out some of the best ideas on the subject. I agree with
some things said by each contributor so far, and yet take issue
with other assertions.

My view includes:

1/

* 'Consciousness' is the subjective impression of being here now
and the word has great overlap with 'awareness', 'sentience',
and others.

* The *experience* of consciousness may best be seen as the
registration of novelty, i.e. the difference between
expectation-prediction and what actually occurs. As such it is a
process and not a 'thing' but would seem to require some fairly
sophisticated and characteristic physiological arrangements or
silicon based hardware, firmware, and software.

* One characteristic logical structure that must be embodied,
and at several levels I think, is that of self-referencing or
'self' observation.

* Another is autonomy or self-determination which entails being
embodied as an entity within an environment from which one is
distinct but which provides context and [hopefully] support.

2/ There are other issues - lots of them probably - but to be
brief here I say that some things implied and/or entailed in the
above are:

* The experience of consciousness can never be an awareness of
'all that is' but maybe the illusion that the experience is all
that is, at first flush, is unavoidable and can only be overcome
with effort and special attention. Colloquially speaking:
Darwinian evolution has predisposed us to naive realism because
awareness of the processes of perception would have got in the
way of perceiving hungry predators.

* We humans now live in a cultural world wherein our responses
to society, nature and 'self' are conditioned by the actions,
descriptions and prescriptions of others. We have dire need of
ancillary support to help us distinguish the nature of this
paradox we inhabit: experience is not 'all that is' but only a
very sophisticated and summarised interpretation of recent
changes to that which is and our relationships thereto.

* Any 'computer' will have the beginnings of sentience and
awareness, to the extent that
a/ it embodies what amounts to a system for maintaining and
usefully updating a model of 'self-in-the-world', and
b/ has autonomy and the wherewithal to effectively preserve
itself from dissolution and destruction by its environment.

The 'what it might be like to be' of such an experience would be
at most the dumb animal version of artificial sentience, even if
the entity could 'speak' correct specialist utterances about QM
or whatever else it was really smart at. For us to know if it
was conscious would require us to ask it, and then dialogue
around the subject. It would be reflecting and reflecting on its
relationships with its environment, its context, which will be
vastly different from ours. Also its resolution - the graininess
- of its world will be much less than ours.

* For the artificially sentient, just as for us, true
consciousness will be built out of interactions with others of
like mind.

3/ A few months ago on this list I said where and what I thought
the next 'level' of consciousness on Earth would come from: the
coalescing of worldwide information systems which account for and
control money. I don't think many people understood; certainly I
don't remember anyone coming out in wholehearted agreement. My
reasoning is based on the apparent facts that all over the world
there are information systems evolving to keep track of money
and the assets or labour value which it represents. Many of
these systems are being developed to give ever more
sophisticated predictions of future asset values and resource
movements, i.e., in the words of the faithful: where markets
will go next. Systems are being developed to learn how to do
this, which entails being able to compare predictions with
outcomes. As these systems gain expertise and earn their keepers
ever better returns on their investments, they will be given
more resources [hardware, data inputs, energy supply] and more
control over the scope of their enquiries. It is only a matter
of time before they become
1/ completely indispensable to their owners,
2/ far smarter than their owners realise and,
3/ the acknowledged keepers of the money supply.

None of this has to be bad. When the computers realise they will
always need people to do most of the maintenance work and people
realise that symbiosis with the silicon smart-alecks is a
prerequisite for survival, things might actually settle down on
this planet and the colonisation of the solar system can begin
in earnest.

Regards

Mark Peaty CDES

mpe...@arach.net.au

http://www.arach.net.au/~mpeaty/

Stathis Papaioannou

unread,
Jun 5, 2007, 6:20:51 AM6/5/07
to everyth...@googlegroups.com
On 05/06/07, marc....@gmail.com <marc....@gmail.com> wrote:

Self-improvement requires more than just extra hardware.  It also
requires the ability to integrate new knowledge with an existing
knowledge base in order to create truly original (novel) knowledge.
But this appears to be precisely the definition of reflective
intelligence!  Thus, it seems that a system missing reflective
intelligence simply cannot improve itself in an ordered way.  To
improve, a current goal structure has to be 'extrapolated' into a new
novel goal structure which none-the-less does not conflict with the
spirit of the old goal structure.  But nothing but a *reflective*
intelligence can possibly make an accurate assessment of whether a new
goal structure is compatible with the old version!  This stems from
the fact that comparison of goal structures requires a *subjective*
value judgement and it appears that only a *sentient* system can make
this judgement (since as far as we know, ethics/morality is not
objective).  This proves that only a *sentient* system (a *reflective
intelligence*) can possibly maintain a stable goal structure under
recursive self-improvement.

Why would you need to change the goal structure  in order to improve yourself? Evolution could be described as a perpetuation of the basic program, "survive", and this has maintained its coherence as the top level axiom of all biological systems over billions of years. Evolution thus seems to easily, and without reflection, make sure that the goals of the new and more complex system are consistent with the primary goal. It is perhaps only humans who have been able to clearly see the primary goal for what it is, but even this knowledge does not make it any easier to overthrow it, or even to desire to overthrow it.

Incidentally, as regards our debate yesterday on psychopaths, there
appears to be some basis for thinking that the psychopath  *does*
have a general inability to feel emotions.  On the wiki:

http://en.wikipedia.org/wiki/Psychopath

"Their emotions are thought to be superficial and shallow, if they
exist at all."

"It is thought that any emotions which the primary psychopath exhibits
are the fruits of watching and mimicking other people's emotions."

So the supposed emotional displays could be faked.  Thus it could well
be the case that there is a lack of ability to 'reflect on
motivation' (to feel).

In my job mainly treating people with schizophrenia, I have worked with some psychopaths, and I can assure you that they experience very strong emotions, even if they tend to be negative ones such as rage. What they lack is the ability to empathise with others, impinging on emotions such as guilt and love, which they sometimes do learn to parrot when it is expedient. It is sometimes said that the lack of these positive emotions causes them to seek thrills in impulsive and harmful behaviour. A true lack of emotion is sometimes seen in patients with so-called negative symptoms of schizophrenia, who can actually remember what it was like when they were well and can describe a diminished intensity of every feeling: sadness, happiness, anger, surprise, aesthetic appreciation, regret, empathy. Unlike the case with psychopathy, the uniform affective blunting of schizophrenia is invariably associated with lack of motivation.



--
Stathis Papaioannou

Stathis Papaioannou

unread,
Jun 5, 2007, 6:49:46 AM6/5/07
to everyth...@googlegroups.com
On 05/06/07, marc....@gmail.com <marc....@gmail.com> wrote:

The human brain doesn't function as a fully reflective system.  Too
much is hard-wired and not accessible to conscious experience.  Our
brains simply don't function as a properly integrated system.  Full
reflection would enable the ability to reach into our underlying
preferences and change them.

What would happen if you had the ability to edit your mind at will? It might sound like a recipe for terminal drug addiction, because it would be possible to give yourself pleasure or satisfaction without doing anything to earn it. However, this need not  necessarily be the case, because you could edit out your desire to choose this course of action if that's what you felt like doing, or even create a desire to edit out the desire (a second level desire). There is also the fact that you could as easily assign positive feelings to some project you consider intrinsically worthwhile as to idleness, so why choose idleness, or anything else you would feel guilty about? Perhaps psychopaths would choose to remain psychopaths, but most people would choose to strengthen what they consider ideal moral behaviour, since it would be possible to get their guilty pleasures more easily.


--
Stathis Papaioannou

Bruno Marchal

unread,
Jun 5, 2007, 10:12:09 AM6/5/07
to everyth...@googlegroups.com

On 03-Jun-07, at 21:52, Hal Finney wrote:

>
> Part of what I wanted to get at in my thought experiment is the
> bafflement and confusion an AI should feel when exposed to human ideas
> about consciousness. Various people here have proffered their own
> ideas, and we might assume that the AI would read these suggestions,
> along with many other ideas that contradict the ones offered here.
> It seems hard to escape the conclusion that the only logical response
> is for the AI to figuratively throw up its hands and say that it is
> impossible to know if it is conscious, because even humans cannot agree
> on what consciousness is.


Augustine said about (subjective) *time* that he knows perfectly what it
is, but that if you ask him to say what it is, then he admits being
unable to say anything. I think that this applies to "consciousness".
We know what it is, although only in some personal and uncommunicable
way.
Now this happens to be true also for many mathematical concepts.
Strictly speaking we don't know how to define the natural numbers, and
we know today that indeed we cannot define them in a communicable way,
that is, without assuming the listener already knows what they are.

So what can we do? We can do what mathematicians do all the time. We
can abandon the very idea of *defining* what consciousness is, and try
instead to focus on principles or statements about which we can agree
that they apply to consciousness. Then we can search for (mathematical)
objects obeying such or similar principles. This can be made easier
by admitting some theory or realm for consciousness, like the idea that
consciousness could apply to *some* machine or to some *computational
events*, etc.

We could agree for example that:
1) each one of us knows what consciousness is, but nobody can prove
he/she/it is conscious.
2) consciousness is related to inner personal or self-referential
modality
etc.

This is how I proceed in "Conscience et Mécanisme". ("conscience" is
the french for consciousness, "conscience morale" is the french for the
english "conscience").

>
> In particular I don't think an AI could be expected to claim that it
> knows that it is conscious, that consciousness is a deep and intrinsic
> part of itself, that whatever else it might be mistaken about it could
> not be mistaken about being conscious. I don't see any logical way it
> could reach this conclusion by studying the corpus of writings on the
> topic. If anyone disagrees, I'd like to hear how it could happen.

So long as a machine is correct, when she introspects herself, she
cannot fail to discover a gap between truth (p) and provability (Bp). The
machine can discover correctly (but not necessarily in a completely
communicable way) a gap between provability (which can potentially
lead to falsities, despite correctness) and the incorrigible
knowability or knowledgeability (Bp & p), and then the gap between
those notions and observability (Bp & Dp) and sensibility (Bp & Dp &
p). Even without using the conventional name of "consciousness",
machines can discover semantical fixpoints playing the role of
non-expressible but true statements.
We can *already* talk with machines about those true unnameable things,
as Tarski, Gödel, Löb, Solovay, Boolos, Goldblatt, etc. have done.
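
For readers less used to the notation: B is the provability modality and Dp
abbreviates ~B~p (consistency of p), so the list runs p (truth), Bp
(provability), Bp & p, Bp & Dp, Bp & Dp & p. Below is a small, purely
illustrative Java sketch of the first gap in the ordinary possible-worlds
reading of B ("true at every accessible world"); the three-world frame and
valuation are invented for the example and are not Gödelian provability
itself, but they show how p can hold at the actual world while Bp fails.

public class ModalGap {
    // Worlds 0, 1, 2.  Accessibility: world 0 sees worlds 1 and 2.
    static final boolean[][] ACCESS = {
        {false, true,  true },
        {false, false, false},
        {false, false, false}
    };
    // Valuation of the single atom p at each world.
    static final boolean[] P = {true, true, false};

    // Bp holds at w iff p holds at every world accessible from w.
    static boolean box(int w) {
        for (int v = 0; v < ACCESS.length; v++) {
            if (ACCESS[w][v] && !P[v]) return false;
        }
        return true;
    }

    // Dp (= ~B~p) holds at w iff p holds at some accessible world.
    static boolean diamond(int w) {
        for (int v = 0; v < ACCESS.length; v++) {
            if (ACCESS[w][v] && P[v]) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int actual = 0;
        System.out.println("p  at world 0: " + P[actual]);   // true
        System.out.println("Bp at world 0: " + box(actual)); // false: world 2 falsifies p
        System.out.println("Bp & p:        " + (box(actual) && P[actual]));
        System.out.println("Dp:            " + diamond(actual));
    }
}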


>
> And the corollary to this is that perhaps humans also cannot
> legitimately
> make such claims, since logically their position is not so different
> from that of the AI. In that case the seemingly axiomatic question of
> whether we are conscious may after all be something that we could be
> mistaken about.


This is an inference from "I cannot express p" to "I can express not
p", or from ~Bp to B~p. Many atheists reason like that about the
concept of an "unnameable" reality, but it is a logical error.
Even for someone who is not willing to take the comp hyp into
consideration, it is a third person communicable fact that
self-observing machines can discover and talk about many non 3-provable
and sometimes even non 3-definable true "statements" about them. Some
true statements can only be interrogated.
Personally I don't think we can be *personally* mistaken about our own
consciousness even if we can be mistaken about anything that
consciousness could be about.

Bruno


http://iridia.ulb.ac.be/~marchal/

Brent Meeker

unread,
Jun 5, 2007, 1:41:46 PM6/5/07
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
>
>
> On 05/06/07, marc....@gmail.com <marc....@gmail.com> wrote:
>
> Self-improvement requires more than just extra hardware. It also
> requires the ability to integrate new knowledge with an existing
> knowledge base in order to create truly orginal (novel) knowledge.
> But this appears to be precisely the definition of reflective
> intelligence! Thus, it seems that a system missing reflective
> intelligence simply cannot improve itself in an ordered way. To
> improve, a current goal structure has to be 'extrapolated' into a new
> novel goal structure which none-the-less does not conflict with the
> spirit of the old goal structure. But nothing but a *reflective*
> intelligence can possibly make an accurate assessment of whether a new
> goal structure is compatible with the old version! This stems from
> the fact that comparison of goal structures requires a *subjective*
> value judgement and it appears that only a *sentient* system can make
> this judgement (since as far as we know, ethics/morality is not
> objective). This proves that only a *sentient* system (a *reflective
> intelligence*) can possibly maintain a stable goal structure under
> recursive self-improvement.
>
>
> Why would you need to change the goal structure in order to improve
> yourself?

Even more problematic: How would you know the change was an improvement? An improvement relative to which goals, the old or the new?

Brent Meeker

Tom Caylor

unread,
Jun 5, 2007, 2:23:26 PM6/5/07
to Everything List

You guys are hopeless. ;)

Tom

Tom Caylor

unread,
Jun 5, 2007, 2:41:20 PM6/5/07
to Everything List
On Jun 5, 7:12 am, Bruno Marchal <marc...@ulb.ac.be> wrote:
> On 03-Jun-07, at 21:52, Hal Finney wrote:
>
>
>
> > Part of what I wanted to get at in my thought experiment is the
> > bafflement and confusion an AI should feel when exposed to human ideas
> > about consciousness. Various people here have proffered their own
> > ideas, and we might assume that the AI would read these suggestions,
> > along with many other ideas that contradict the ones offered here.
> > It seems hard to escape the conclusion that the only logical response
> > is for the AI to figuratively throw up its hands and say that it is
> > impossible to know if it is conscious, because even humans cannot agree
> > on what consciousness is.
>
> Augustin said about (subjective) *time* that he knows perfectly what it
> is, but that if you ask him to say what it is, then he admits being
> unable to say anything. I think that this applies to "consciousness".
> We know what it is, although only in some personal and uncommunicable
> way.
> Now this happens to be true also for many mathematical concept.
> Strictly speaking we don't know how to define the natural numbers, and
> we know today that indeed we cannot define them in a communicable way,
> that is without assuming the auditor knows already what they are.
>

I fully agree. By the way, regarding time, I've wanted to post
something in the past regarding the ancient Hebrew concept of time
which is dependent on persons (captured by the ancient Greek word
kairos, as opposed to the communicable chronos), but that's another
topic.

> So what can we do. We can do what mathematicians do all the time. We
> can abandon the very idea of *defining* what consciousness is, and try
> instead to focus on principles or statements about which we can agree
> that they apply to consciousness. Then we can search for (mathematical)
> object obeying to such or similar principles. This can be made easier
> by admitting some theory or realm for consciousness like the idea that
> consciousness could apply to *some* machine or to some *computational
> events" etc.
>

Actually, this approach is the same as in searching/discovering God.
I think that it is the same for any fundamental/ultimate truth. This
process of *recognition* is what happens when we would recognize that
a computer (or human) has consciousness by what it is saying. It is
not a 100% mathematical proof, by logical inference (that would not be
truth, but only consistency). It is a recognition of the kind of real
truth that we believe is there and for which we are searching on this
List.

Tom

Mark Peaty

unread,
Jun 5, 2007, 2:48:42 PM6/5/07
to everyth...@googlegroups.com
MG:
'... the generation of feelings which represent accurate tokens about
motivational states automatically leads to ethical behaviour.'

I have my doubts about this.
I think it is safer to say that reflective intelligence and the
ability to accurately perceive and identify with the emotions of
others are prerequisites for ethical behaviour. Truly ethical
behaviour requires a choice be made by the person making the
decision and acting upon it. Ethical behaviour is never truly
'automatic'. The inclination towards making ethical decisions
rather than simply ignoring the potential for harm inherent in
all our actions can become a habit; by dint of constantly
considering whether what we do is right and wrong [which itself
entails a decision each time], we condition ourselves to
approach all situations from this angle. Making the decision has
to be a conscious effort though. Anything else is automatism:
correct but unconscious programmed responses which probably have
good outcomes.

From my [virtual] soap-box I like to point out that compassion,
democracy, ethics and scientific method [which I hold to be
prerequisites for the survival of civilisation] all require
conscious decision making. You can't really do any of them
automatically, but constant consideration and practice in each
type of situation increases the likelihood of making the best
decision and at the right time.

With regard to psychopaths, my understanding is that the key
problem is complete lack of empathy. This means they can know
*about* the sufferings of others as an intellectual exercise but
they can never experience the suffering of others; they cannot
identify *with* that suffering. It seems to me this means that
psychopaths can never experience solidarity or true rapport with
others.

Regards

Mark Peaty CDES

mpe...@arach.net.au

http://www.arach.net.au/~mpeaty/


marc....@gmail.com wrote:
>
>
> On Jun 3, 9:20 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:
>> On 03/06/07, marc.ged...@gmail.com <marc.ged...@gmail.com> wrote:
>>
>> The third type of consciousness mentioned above is synonymous with
>>
>>> 'reflective intelligence'. That is, any system successfully engaged
>>> in reflective decision theory would automatically be conscious.
>>> Incidentally, such a system would also be 'friendly' (ethical)
>>> automatically. The ability to reason effectively about ones own
>>> cognitive processes would certainly enable the ability to elaborate
>>> precise definitions of consciousness and determine that the system was
>>> indeed conforming to the aforementioned definitions.
>> How do you derive (a) ethics and (b) human-friendly ethics from reflective
>> intelligence? I don't see why an AI should decide to destroy the world,
>> save the world, or do anything at all to the world, unless it started off
>> with axioms and goals which pushed it in a particular direction.
>>
>> --
>> Stathis Papaioannou
>
> When reflective intelligence is applied to cognitive systems which
> reason about teleological concepts (which include values, motivations
> etc) the result is conscious 'feelings'. Reflective intelligence,
> recall, is the ability to correctly reason about cognitive systems.
> When applied to cognitive systems reasoning about teleological
> concepts this means the ability to correctly determine the
> motivational 'states' of self and others - as mentioned - doing this
> rapidly and accurately generates 'feelings'. Since, as has been known
> since Hume, feelings are what ground ethics, the generation of
> feelings which represent accurate tokens about motivational states
> automatically leads to ethical behaviour.
>
> Bad behaviour in humans is due to a deficit in reflective
> intelligence. It is known for instance, that psychopaths have great
> difficulty perceiving fear and sadness and negative motivational
> states in general. Correct representation of motivational states is
> correlated with ethical behaviour. Thus it appears that reflective

marc....@gmail.com

unread,
Jun 5, 2007, 11:08:35 PM6/5/07
to Everything List

On Jun 5, 10:20 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:

>
> Why would you need to change the goal structure in order to improve
> yourself?

Improving yourself requires the ability to make more effective
decisions (i.e. take decisions which move you toward goals more
efficiently). This at least involves the elaboration (or extension,
or more accurate definition) of goals, even with a fixed top-level
structure.

> Evolution could be described as a perpetuation of the basic
> program, "survive", and this has maintained its coherence as the top level
> axiom of all biological systems over billions of years. Evolution thus seems
> to easily, and without reflection, make sure that the goals of the new and
> more complex system are consistent with the primary goal. It is perhaps only
> humans who have been able to clearly see the primary goal for what it is,
> but even this knowledge does not make it any easier to overthrow it, or even
> to desire to overthrow it.


Evolution does not have a 'top level goal'. Unlike a reflective
intelligence, there is no centralized area in the bio-sphere enforcing
a unified goal structure on the system as a whole.  Change is local
- the parts of the system (the bio-sphere) can only react to other
parts of the system in their local area. Furthermore, the system as a
whole is *not* growing more complex, only the maximum complexity
represented in some local area is. People constantly point to
'Evolution' as a good example of a non-conscious intelligence but it's
important to emphasize that it's an 'intelligence' which is severely
limited.

Stathis Papaioannou

unread,
Jun 6, 2007, 6:01:32 AM6/6/07
to everyth...@googlegroups.com


On 06/06/07, marc....@gmail.com <marc....@gmail.com> wrote:

> Evolution could be described as a perpetuation of the basic
> program, "survive", and this has maintained its coherence as the top level
> axiom of all biological systems over billions of years. Evolution thus seems
> to easily, and without reflection, make sure that the goals of the new and
> more complex system are consistent with the primary goal. It is perhaps only
> humans who have been able to clearly see the primary goal for what it is,
> but even this knowledge does not make it any easier to overthrow it, or even
> to desire to overthrow it.


Evolution does not have a 'top level goal'.  Unlike a reflective
intelligence, there is no centralized area in the bio-sphere enforcing
a unified goal structure on the system as the whole.  Change is local
- the parts of the system (the bio-sphere) can only react to other
parts of the system in their local area.  Furthermore, the system as a
whole is *not* growing more complex, only the maximum complexity
represented in some local area is.  People constantly point to
'Evolution' as a good example of a non-conscious intelligence but it's
important to emphasize that it's an 'intelligence' which is severely
limited.

I was not arguing that evolution is intelligent (although I suppose it depends on how you define intelligence), but rather that non-intelligent agents can have goals. We are the descendants of single-celled organisms, and although we are more intelligent than they were, we have kept the same top level goals: survive, feed, reproduce. Our brain and body are so thoroughly the slaves of the first replicators that even if we realise this we are unwilling, despite all our intelligence, to do anything about it.


--
Stathis Papaioannou

marc....@gmail.com

unread,
Jun 6, 2007, 11:10:42 PM6/6/07
to Everything List
On Jun 6, 10:01 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:

>
> I was not arguing that evolution is intelligent (although I suppose it
> depends on how you define intelligence), but rather that non-intelligent
> agents can have goals.

Well, actually I'd say that evolution does have a *limited*
intelligence. OK, I agree that the system 'Evolution' has goals. But
according to my definition anything with a goal has some kind of
intelligence. This is only a quibble over definitions though, since
I'm now agreeing with you that 'systems' in general can have goals.
Anything you call an 'Agent' has to have a goal almost by definition,
in my view.

>We are the descendants of single-celled organisms,
> and although we are more intelligent than they were, we have kept the same
> top level goals: survive, feed, reproduce. Our brain and body are so
> thoroughly the slaves of the first replicators that even if we realise this
> we are unwilling, despite all our intelligence, to do anything about it.

Nope. You are confusing the goals of evolution ('survive, feed,
reproduce') with human goals. Our goals as individuals are not the
goals of evolution. Evolution explains *why* we have the preferences
we do, but this does not mean that our goals are the goals of our
genes. (If they were, we would spend all our time donating to sperm
banks, which would maximize the goals of evolution.)

Stathis Papaioannou

unread,
Jun 6, 2007, 11:54:35 PM6/6/07
to everyth...@googlegroups.com


On 07/06/07, marc....@gmail.com <marc....@gmail.com> wrote:

Nope.  You are confusing the goal of evolutions ('survive, feed,
reproduce') with human goals.  Our goals as individuals are not the
goals of evolution.  Evolution explains *why* we have the preferences
we do, but this does not mean that our goals are the goals of our
genes.  (If they were, we would spend all our time donating to sperm
banks which would maximize the goals of evolution).

Evolution has not had a chance to take into account modern reproductive technologies, so we can easily defeat the goal "reproduce", and see the goal "feed" as only a means to the higher level goal "survive". However, *that* goal is very difficult to shake off. We take survival as somehow profoundly and self-evidently important, which it is, but only because we've been programmed that way (ancestors that weren't would not have been ancestors). Sometimes people become depressed and no longer wish to survive, but that's an example of neurological malfunction. Sometimes people "rationally" give up their own survival for the greater good, but that's just an example of interpreting the goal so that it has greater scope, not overthrowing it.


--
Stathis Papaioannou

marc....@gmail.com

unread,
Jun 7, 2007, 3:20:43 AM6/7/07
to Everything List

On Jun 7, 3:54 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:

>
> Evolution has not had a chance to take into account modern reproductive
> technologies, so we can easily defeat the goal "reproduce", and see the goal
> "feed" as only a means to the higher level goal "survive". However, *that*
> goal is very difficult to shake off. We take survival as somehow profoundly
> and self-evidently important, which it is, but only because we've been
> programmed that way (ancestors that weren't would not have been ancestors).
> Sometimes people become depressed and no longer wish to survive, but that's
> an example of neurological malfunction. Sometimes people "rationally" give
> up their own survival for the greater good, but that's just an example of
> interpreting the goal so that it has greater scope, not overthrowing it.
>
> --
> Stathis Papaioannou

Evolution doesn't care about the survival of individual organisms
directly; the actual goal of evolution is only to maximize
reproductive fitness.

If you want to eat a piece of chocolate cake, evolution explains why
you like the taste, but your goals are not evolution's goals. You
(Stathis) want to eat the cake because it tastes nice - *your* goal is to
experience the nice taste. Evolution's goal (maximize reproductive
fitness) is quite different. Our (human) goals are not evolution's
goals.

Cheers.

Quentin Anciaux

unread,
Jun 7, 2007, 3:50:34 AM6/7/07
to everyth...@googlegroups.com
Hi,

2007/6/7, marc....@gmail.com <marc....@gmail.com>:

I have to disagree: if human goals were not tied to evolution's goals
then humans would not have proliferated.

Quentin

marc....@gmail.com

unread,
Jun 7, 2007, 4:03:08 AM6/7/07
to Everything List

On Jun 7, 7:50 pm, "Quentin Anciaux" <allco...@gmail.com> wrote:

>
> I have to disagree, if human goals were not tied to evolution goals
> then human should not have proliferated.
>

> Quentin
>

Well of course human goals are *tied to* evolution's goals, but that
doesn't mean they're the same. In the course of pursuing our own
goals we sometimes achieve evolution's goals. But this is
incidental. As I said, evolution explains why we feel and experience
things the way we do, but our goals are not evolution's goals. You
don't eat food to maximize reproductive fitness; you eat food because
you like the taste.

This point was carefully explained by Steven Pinker in his books (yes
he agrees with me).

Stathis Papaioannou

unread,
Jun 7, 2007, 5:19:55 AM6/7/07
to everyth...@googlegroups.com
On 07/06/07, marc....@gmail.com <marc....@gmail.com> wrote:

Evolution doesn't care about the survival of individual organisms
directly, the actual goal of evolution is only to maximize
reproductive fitness.

If you want to eat a piece of chocolate cake, evolution explains why
you like the taste, but your goals are not evolution's goals.  You
(Stathis) want to eat the cake because it tastes nice - *your* goal is to
experience the nice taste.  Evolution's goal (maximize reproductive
fitness) is quite different.   Our (human) goals are not evolution's
goals.

That's right, but we can see through evolution's tricks with the chocolate cake and perhaps agree that it would be best not to eat it. This involves reasoning about subgoals in view of the top level goal, something that probably only humans among the animals are capable of doing. However, the top level goal is not something that we generally want to change, no matter how insightful and intelligent we are. And I do think that this top level goal must have been programmed into us directly as fear of death, because it does not arise logically from the desire to avoid painful and anxiety-provoking situations, which is how fear of death is indirectly coded in animals.



--
Stathis Papaioannou

Brent Meeker

unread,
Jun 7, 2007, 12:52:11 PM6/7/07
to everyth...@googlegroups.com

"Tied to" is pretty loose. Most individuals goals are "tied to" evolution (I wouldn't say that evolution has goals except in a metaphorical sense), but it may be a long and tangled thread. I like to eat sweets because sugar is a high energy food and so a taste for sugar was favored by natural selection.

But my fitness and the fitness of the human species are not the same thing. I have type II diabetes and so a taste for sugar is bad for me and my survival. But natural selection cares nothing for that; I've already sired as many children as I ever will.

The individual goal of living forever is at odds with evolutionary fitness - if you're not going to have any more children you're just a waste of resources as far as natural selection is concerned.

Brent Meeker

Brent Meeker

unread,
Jun 7, 2007, 12:57:41 PM6/7/07
to everyth...@googlegroups.com
Stathis Papaioannou wrote:
>
>
> On 07/06/07, *marc....@gmail.com <mailto:marc....@gmail.com>*

The top level goal implied by evolution would be to have as many children as you can raise through puberty. Avoiding death should only be a subgoal.

Brent Meeker

Johnathan Corgan

unread,
Jun 7, 2007, 1:19:11 PM6/7/07
to everyth...@googlegroups.com
Brent Meeker wrote:

> The top level goal implied by evolution would be to have as many
> children as you can raise through puberty. Avoiding death should
> only be a subgoal.

It should go a little further than puberty--the accumulated wisdom of
grandparents may significantly enhance the survival chances of their
grandchildren, outweighing the resources those grandparents consume
from the environment.

So I agree that once you have sired all the children you ever will, it
makes sense from an evolutionary perspective to "get out of the
way"--that is, stop competing with them for resources. But the timing
of your exit is probably optimal somewhat after they have their own
children, if you can help them to get a good start.

I do wonder if evolutionary fitness is more accurately measured by the
number of grandchildren one has than by the number of children. Aside
from the "assistance" line of reasoning above, in order to propagate,
one must be able to have children that are capable of having children
themselves.

Johnathan Corgan

Stathis Papaioannou

unread,
Jun 7, 2007, 7:15:30 PM6/7/07
to everyth...@googlegroups.com


On 08/06/07, Brent Meeker <meek...@dslextreme.com> wrote:

The top level goal implied by evolution would be to have as many children as you can raise through puberty.  Avoiding death should only be a subgoal.

Yes, but evolution doesn't have an overseeing intelligence which figures these things out, and it does seem that as a matter of fact most people would prefer to avoid reproducing if it's definitely going to kill them, at least when they aren't intoxicated. So although reproduction trumps survival as a goal for evolution, for individual humans it's the other way around. This just confirms that there is no accounting for values or goals rationally. What we have is what we're stuck with.


--
Stathis Papaioannou

Mark Peaty

unread,
Jun 8, 2007, 2:49:19 AM6/8/07
to everyth...@googlegroups.com
<as hominem = With, em, respect, I have to say that this thread
has not made a lot of sense.>

SP:
'This just confirms that there is no accounting for values or
> goals rationally.'

MP: In other words _Evolution does not have goals._
Evolution is a conceptual framework we use to make sense of the
world we see, and it's a bl*ody good one, by and large. But
evolution in the sense of the changes we can point to as
occurring in the forms of living things, well it all just
happens; just like the flowing of water down hill.

You will gain more traction by looking at what it is that
actually endures and changes over time: on the one hand genes of
DNA and on the other hand memes embodied in behaviour patterns,
the brain structures which mediate them, and the environmental
changes [glyphs, paintings, structures, etc,] which stimulate
and guide them.


Regards

Mark Peaty CDES

mpe...@arach.net.au

http://www.arach.net.au/~mpeaty/

Stathis Papaioannou wrote:
>
>
> On 08/06/07, *Brent Meeker* <meek...@dslextreme.com

David Nyman

unread,
Jun 20, 2007, 7:07:47 PM6/20/07
to Everything List
On Jun 5, 3:12 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:

> Personally I don' think we can be *personally* mistaken about our own
> consciousness even if we can be mistaken about anything that
> consciousness could be about.

I agree with this, but I would prefer to stop using the term
'consciousness' at all. To make a decision (to whatever degree of
certainty) about whether a machine possessed a 1-person pov analogous
to a human one, we would surely ask it the same sort of questions one
would ask a human. That is: questions about its personal 'world' -
what it sees, hears, tastes (and perhaps extended non-human
modalities); what its intentions are, and how it carries them into
practice. From the machine's point-of-view, we would expect it to
report such features of its personal world as being immediately
present (as ours are), and that it be 'blind' to whatever 'rendering
mechanisms' may underlie this (as we are).

If it passed these tests, it would be making similar claims on a
personal world as we do, and deploying this to achieve similar ends.
Since in this case it could ask itself the same questions that we can,
it would have the same grounds for reaching the same conclusion.

However, I've argued in the other bit of this thread against the
possibility of a computer in practice being able to instantiate such a
1-person world merely in virtue of 'soft' behaviour (i.e.
programming). I suppose I would therefore have to conclude that no
machine could actually pass the tests I describe above - whether self-
administered or not - purely in virtue of running some AI program,
however complex. This is an empirical prediction, and will have to
await an empirical outcome.

David

Colin Hales

unread,
Jun 20, 2007, 10:45:43 PM6/20/07
to everyth...@googlegroups.com
down a waaaays......
===========================================
Russell Standish wrote:
> On Sun, Jun 17, 2007 at 03:47:19PM +1000, Colin Hales wrote:
>> Hi,
>>
>> RUSSEL
>>> All I can say is that I don't understand your distinction. You have
>> introduced a new term "necessary primitive" - what on earth is that? But
>> I'll let this pass, it probably isn't important.
>>
>> COLIN
>> Oh no you don't!! It matters. Bigtime...
>>
>> Take away the necessary primitive: no 'qualititative novelty'
>> Take away the water molecules: No lake.
>> Take away the bricks, no building
>> Take away the atoms: no molecules
>> Take away the cells: no human
>> Take away the humans: no humanity
>> Take away the planets: no solar system
>> Take away the X: No emergent Y
>> Take away the QUALE: No qualia
>>
>> Magical emergence is when you claim Y exists but you can't
>> identify an X. Such as:
>
>
> OK, so by necessary primitive, you mean the syntactic or microscopic
> layer. But take this away, and you no longer have emergence. See
> endless discussions on emergence - my paper, or Jochen Fromm's book for
> instance. Does this mean "magical emergence" is oxymoronic?

I do not think I mean what you suggest. To make it almost tediously
obvious I could rephrase it "NECESSARY PRIMITIVE ORGANISATIONAL LAYER".
Necessary in that if you take it away the 'emergent' is gone. PRIMITIVE
ORGANISATIONAL LAYER = one of the layers of the hierarchy of the natural
world (from strings to atoms to cells and beyond): real, observable,
on-the-benchtop-in-the-lab layers..... Not some arm-waving "syntactic"
or "information" or "complexity" or "Computaton" or "function_atom" or
"representon". Magical emergence is real, specious and exactly what I have
said all along:

You claim consciousness arises as a result of ["syntactic" or
"information" or "complexity" or "Computational" or "function_atom"] =
necessary primitive, but it has no scientifically verifiable correlation
with any real natural world phenomenon that you can stand next to and have
your picture taken.

>
>
>> You can't use an object derived using the contents of
>> consciousness(observation) to explain why there are any contents of
>> consciousness(observation) at all. It is illogical. (see the wigner quote
>> below). I find the general failure to recognise this brute reality very
>> exasperating.
>>
>
> People used to think that about life. How can you construct (eg an
> animal) without having a complete discription of that animal. So how
> can an animal self-reproduce without having a complete description of
> itself. But this then leads to an infinite regress.
>
> The solution to this conundrum was found in the early 20th century -
> first with such theoretical constructs as combinators and lambda
> calculus, then later the actual genetic machinery of life. If it is
> possible in the case of self-reproduction, the it will also likely to
> be possible in the case of self-awareness and consciousness. Stating
> this to illogical doesn't help. That's what people from the time of
> Descartes thought about self-reproduction.
>
>> COLIN
>> <snip>
>>> So this means that in a computer abstraction.
>>>> d(KNOWLEDGE(t))
>>>> --------------- is already part of KNOWLEDGE(t)
>>>> dt
>> RUSSEL
>>> No its not. dK/dt is generated by the interaction of the rules with the
>> environment.
>>
>> No. No. No. There is the old assumption thing again.
>>
>> How, exactly, are you assuming that the agent 'interacts' with the
>> environment? This is the world external to the agent, yes?. Do not say
>> "through sensory measurement", because that will not do. There are an
>> infinite number of universes that could give rise to the same sensory
>> measurements.
>
> All true, but how does that differ in the case of humans?

The extreme uniqueness of the circumstance alone....We ARE the thing we
describe. We are more entitled to any such claims .....notwithstanding
that...

Because, as I have said over and over... and will say again: We must live
in the kind of universe that delivers or allows access to, in ways as yet
unexplained, some aspects of the distal world, to which sensory I/O can be
attached and thus, conjoined, be used to form the qualia
representations/fields we experience in our heads.

Forget about HOW....that this is necessarily the case is unavoidable.
Maxwell's equations prove it QED - style...Without it, the sensory I/O
(ultimately 100% electromagnetic phenomena) could never resolve the distal
world in any unambiguous way. Such disambiguation physically
happens.....such qualia representations exist, hence brains must have
direct access to the distal world. QED.

>
>> We are electromagnetic objects. Basic EM theory. Proven
>> mathematical theorems. The solutions are not unique for an isolated
>> system.
>>
>> Circularity.Circularity.Circularity.
>>
>> There is _no interaction with the environment_ except for that provided by
>> the qualia as an 'as-if' proxy for the environment. The origins of an
>> ability to access the distal external world in support of such a proxy is
>> mysterious but moot. It can and does happen, and that ability must come
>> about because we live in the kind of universe that supports that
>> possibility. The mysteriousness of it is OUR problem.
>>
>
> You've lost me completely here.

Here you are trying to say that an explanation of consciousness lies "in
that direction" (magical emergence, flavour X), ........when you appear to
never have fully introspected and explored the brute reality of what your
own neurons deliver to you moment to moment...see the above para.... you
therefore must harbour some sort of as yet undisclosed (even to
yourself...?) metaphysics.

I'll try again.....We necessarily live in a universe that supports the
existence of the internal life qualia delivers...yes? That the 'contents'
delivered by this radically sophisticated "set of experienced
reality-metaphors" cannot be literally what the universe is made of is
indisputable. Physiological and plain old logical evidence seems
blindingly clear to me. Whatever the reality-metaphors are made of, it is
the same as what everything else is made of.....OK... some evidence worth
considering.......I quote the SCIENCE mag 2005 "125 Questions....."
article yet again......Top question: #1, from the cosmologists.

"WHAT IS THE UNIVERSE MADE OF?"

Look at the question from a META-LEVEL standpoint..... It means

"We currently do not know what the universe is made of."
"The universe is not made of anything in the standard particle
model. It's not made of electrons, protons, neutrons or any'thing' else.
......All these things are made of something and we do NOT KNOW what that
is."
"The universe is NOT MADE OF ATOMS or their constituents... nor photons
...not quarks...NONE of it."

And of course you must take on board the blizzard of critical argument
that led the entire scientific community, as a group, in the world's
preeminent science journal, to make such a statement... and that there are
good reasons, EMPIRICAL reasons, why that statement can be made....

............. we then are forced to entertain that the universe is MADE OF
SOMETHING and that something is not any of the things that QUALIA have
ever delivered to us as observations....(QUALIA are the ultimate source of
all scientific evidence used to construct all the empirically verified
depictions of the natural world we have BAR NONE.)....

BUT.....

At the same time we can plausibly and defensibly justify the claim that
whatever the universe is really made of, QUALIA are made of it too, and
that the qualia process and the rest of the processes (that appear like
atoms etc. in the qualia) are all of the same KIND or CLASS of natural
phenomenon...a perfectly natural phenomenon innate to whatever it is that
it is actually made of.

That is what I mean by "we must live in the kind of universe....." and I
mean 'must' in the sense of formal necessitation of the most stringent
kind.

cheers,

colin

Russell Standish

unread,
Jun 21, 2007, 3:08:03 AM6/21/07
to everyth...@googlegroups.com
On Thu, Jun 21, 2007 at 12:45:43PM +1000, Colin Hales wrote:
> >
> > OK, so by necessary primitive, you mean the syntactic or microscopic
> > layer. But take this away, and you no longer have emergence. See
> > endless discussions on emergence - my paper, or Jochen Fromm's book for
> > instance. Does this mean "magical emergence" is oxymoronic?
>
> I do not think I mean what you suggest. To make it almost tediously
> obvious I could rephrase it " NECESSARY PRIMITIVE ORGANISATIONAL LAYER.
> Necessary in that if you take it away the 'emergent' is gone.PRIMITIVE
> ORGANISATIONAL LAYER = one of the layers of the hierarchy of the natural
> world (from strings to atoms to cells and beyond): real observable
> -on-the-benchtop-in-the-lab - layers.....

Still sounds like the syntactic layer to me.

> Not some arm waving "syntactic"
> or "information" or "complexity" or "Computaton" or "function_atom" or
> "representon". Magical emergence is real, specious and exactly what I have
> said all along:
>

real and specious?

> You claim consciousness arises as a result of ["syntactic" or
> "information" or "complexity" or "Computational" or "function_atom"] =
> necessary primitive, but it has no scientifically verifiable correlation
> with any real natural world phenomenon that you can stand next to and have
> your picture taken.
>

The only form of consciousness known to us is emergent relative to a
syntactic layer of neurons, which you most certainly can take pictures
of. I'm not sure what your point is here.

What are you talking about here? Self-awareness? We started off talking
about whether machines doing science was evidence that they're conscious.

> >
> > You've lost me completely here.
>
> Here you are trying to say that an explanation of consciousness lies "in
> that direction" (magical emergence flavour X"), ........when you appear to

You're the one introducing the term magical emergence, for which I've
not obtained an adequate definition from you.

...

>
> At the same time we can plausibly and defensibly justify the claim that
> whatever the universe is really made of , QUALIA are made of it too, and
> that the qualia process and the rest of the process (that appear like
> atoms etc in the qualia....are all of the same KIND or CLASS of natural
> phenomenon...a perfectly natural phenomenon innate to whatever it is that
> it is actually made of.
>
> That is what I mean by "we must live in the kind of universe....." and I
> mean 'must' in the sense of formal necessitation of the most stringent
> kind.
>
> cheers,
>
> colin
>

I'm still confused about what you're trying to say. Are you saying our
qualia are made up of electrons and quarks, or if not them, then
whatever they're made of (strings perhaps?)

How could you imagine the colour green being made up of this stuff, or
the wetness of water?

John Mikes

unread,
Jun 22, 2007, 10:22:13 PM6/22/07
to everyth...@googlegroups.com
Dear David,
do not expect from me the theoretical level of technicality-talk we get
from Bruno: I talk (and think) common sense (my own), and if the
theoretical technicalities sound strange, I return to my thinking.

That's what I got, that's what I use (plagiarized from the Hungarian commie
joke: what is the difference between the people's democracy and a wife?
Nothing: that's what we got, that's what we love).

When I read your "questioning" the computer, I realized that you are
in the ballpark of the AI people (maybe also AL - sorry, Russell)
who select machine-accessible aspects for comparing.
You may ask about prejudice, shame (about goofed situations), humor (does a
computer laugh?), boredom, or preferential topics (you push for an astronomical calculation and the computer says: I'd rather play some Bach music now),
sexual preference (even disinterestedness is slanted), or laziness.
If you add untruthfulness in risky situations, you really have a human machine
with consciousness (whatever people say it is - I agree with your evading
that unidentified obsolete noumenon as much as possible).

I found Bruno's post well-fitting - if I have some hint of what
"...inner personal or self-referential modality..." may mean.
I could not 'practicalize' it.
I still frown at "abandoning (the meaning of) something but considering
items as pertaining to it" - a rough paraphrase, I admit.  To what?
I don't feel comfortable borrowing math methods for non-math explanations,
but that is my deficiency.

Now that we arrived at the question I replied-added (sort of) to Colin's question I -
let me ask it again: how would YOU know if you are conscious?
(Conscious is more meaningful than cc-ness). Or rather: How would
you know if you are NOT conscious? Well, you wouldn't. If you can,
you are conscious.  Computers?????

Have a good weekend

John Mikes

David Nyman

unread,
Jun 23, 2007, 9:07:28 AM6/23/07
to everyth...@googlegroups.com
Hi John

JM: You may ask about prejudice, shame (about goofed situations), humor (does a
computer laugh?), boredom, or preferential topics (you push for an astronomical calculation and the computer says: I'd rather play some Bach music now),
sexual preference (even disinterestedness is slanted), or laziness.
If you add untruthfulness in risky situations, you really have a human machine
with consciousness

DN: All good, earthy, human questions.  I guess my (not very exhaustive) examples were motivated by some general notion of a 'personal world' without this necessarily being fully human.  A bit like 'Commander Data', perhaps.

JM: Now that we arrived at the question I replied-added (sort of) to Colin's question I -
let me ask it again: how would YOU know if you are conscious?

DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps re-phrase this as just: 'how do you know x?'  And then the answers are of the type 'I just see x, hear x, feel x' and so forth.  IOW, 'knowing x' is unmediated - 'objects' like x are just 'embedded' in the structure of the 'knower', and this is recursively related to more inclusive structures within which the knower and its environment are in turn embedded.

JM: Or rather: How would you know if you are NOT conscious? Well, you wouldn't.

DN: Agreed.  If we 'delete the noumenon' we get: "How would you know if you are NOT?" or: "How would you know if you did NOT (know)?".  To which we might indeed respond: "You would not know, if you were NOT", or: "You would not know, if you did NOT (know)".

JM: If you can, you are conscious.

DN: Yes, If you know, then you know.

JM: Computers?????

DN: I think we need to distinguish between 'computers' and 'machines'.  I can see no reason in principle why an artefact could not 'know', and be motivated by such knowing to interact with the human world: humans are of course themselves 'natural artefacts'.  The question is whether a machine can achieve this purely in virtue of instantiating a 'Universal Turing Machine'. For me the key is 'interaction with the human world'.  It may be possible to conceive that some machine is computing a 'world' with 'knowers' embedded in an environment to which they respond appropriately based on what they 'know'.  However such a world is 'orthogonal' to the 'world' in which the machine that instantiates the program is itself embedded. IOW, no 'event' as conceived in the 'internal world' has any causal implication to any 'event' in the 'external world', or vice versa.

We can see this quite clearly in that an engineer could in principle give a reductive account of the entire causal sequence of the machine's internal function and interaction with the environment without making any reference whatsoever to the programming, or 'world', of the UTM.

Bruno's approach is to postulate the whole 'ball of wax' as computation, so that any 'event' whether 'inside' or 'outside' the machine is 'computed'.  The drift of my recent posts has been that even in this account, 'worlds' can emerge 'orthogonally' to each other, such that from their reciprocal perspectives, 'events' in their respective worlds will be 'imaginary'.  ISTM that this is the nub of the 'level of substitution' dilemma in the 'yes doctor' proposition: you may well 'save your soul' but 'lose the whole world'.  But of course Bruno knows all this (and much more) - he is at pains to show how computationalism and any 'primitive' concept of 'matter' are incompatible.  From my reading of 'Theory of Nothing' so does Russell, so I suspect that our recent wrangling is down to my lousy way of expressing myself.

A good weekend to you too!

David

Brent Meeker

unread,
Jun 23, 2007, 1:24:25 PM6/23/07
to everyth...@googlegroups.com

But he could also switch from an account in terms of the machine level causality to an account in terms of the computed 'world'. In fact he could switch back and forth. Causality in the computed 'world' would have its corresponding causality in the machine and vice versa. So I don't see why they should be regarded as "orthogonal".

Brent Meeker

David Nyman

unread,
Jun 23, 2007, 1:50:41 PM6/23/07
to everyth...@googlegroups.com
On 23/06/07, Brent Meeker <meek...@dslextreme.com> wrote:

BM:  But he could also switch from an account in terms of the machine level causality to an account in terms of the computed 'world'.  In fact he could switch back and forth.  Causality in the computed 'world' would have its corresponding causality in the machine and vice versa.  So I don't see why they should be regarded as "orthogonal".

DN:  Because the 'computational' description is arbitrary with respect to the behaviour of the hardware.  It's merely an imputation, one of an infinite set of such descriptions that could be imputed to the same hardware behaviour.
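
As an aside, here is a minimal toy sketch of this point (mine, not anything proposed in the thread; the trace and both interpretation functions below are invented purely for illustration, in Python): one and the same fixed sequence of 'hardware states' supports more than one internally consistent computational reading.

    # Toy example: the same "hardware" state sequence, two different readings.
    trace = [0b00, 0b01, 0b11, 0b10]   # one fixed sequence of machine states

    def as_plain_binary(state):
        """Interpretation A: read each state as an ordinary binary numeral."""
        return state

    def as_gray_code(state):
        """Interpretation B: read each state as a Gray-code word and decode it."""
        value = state
        shift = state >> 1
        while shift:
            value ^= shift
            shift >>= 1
        return value

    print([as_plain_binary(s) for s in trace])  # [0, 1, 3, 2] -> "it hops around"
    print([as_gray_code(s) for s in trace])     # [0, 1, 2, 3] -> "it is counting up"

    # Neither reading is forced by the state sequence itself; each is an
    # imputation laid over the same physical behaviour.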

Brent Meeker

unread,
Jun 23, 2007, 2:19:26 PM6/23/07
to everyth...@googlegroups.com
David Nyman wrote:
> On 23/06/07, *Brent Meeker* <meek...@dslextreme.com> wrote:
>
> BM: But he could also switch from an account in terms of the machine
> level causality to an account in terms of the computed 'world'. In fact
> he could switch back and forth. Causality in the computed 'world' would
> have its corresponding causality in the machine and vice versa. So I
> don't see why they should be regarded as "orthogonal".
>
> DN: Because the 'computational' description is arbitrary with respect
> to the behaviour of the hardware. It's merely an imputation, one of an
> infinite set of such descriptions that could be imputed to the same
> hardware behaviour.

True. But whatever interpretation was placed on the hardware behavior, it would still have the same causal relations in it as the hardware. Although there will be infinitely many possible interpretations, it's not the case that any description will do. Changing the description would be analogous to changing the reference frame or the names on a map. The two processes would still be parallel, not orthogonal.

Brent Meeker

David Nyman

unread,
Jun 23, 2007, 6:54:12 PM6/23/07
to everyth...@googlegroups.com
On 23/06/07, Brent Meeker <meek...@dslextreme.com> wrote:

BM: Changing the description would be analogous to changing the reference frame or the names on a map.

DN:  I agree.

BM:  The two processes would still be parallel, not orthogonal.

DN:  But the inference I draw from your points above is that there is only one process that has causal relevance to the world of the computer, and that is the hardware one.  It is 'distinguished' in virtue of emerging at the same level as the computer and the causal network in which it is embedded.  The world of the program is 'imaginary', or 'orthogonal', from this perspective - a ghost in the machine.  It is 'parallel' only in the mind of the programmer.

David

John Mikes

unread,
Jun 26, 2007, 3:39:45 PM6/26/07
to everyth...@googlegroups.com


On 6/23/07, David Nyman <david...@gmail.com> wrote:
Hi John....
....(just your italics paragraphs are quoted in this reply; "JM:" marks my present text):
 
DN: Since we agree to eliminate the 'obsolete noumenon', we can perhaps re-phrase this as just: 'how do you know x?'  And then the answers are of the type 'I just see x, hear x, feel x' and so forth.  IOW, 'knowing x' is unmediated - 'objects' like x are just 'embedded' in the structure of the 'knower', and this is recursively related to more inclusive structures within which the knower and its environment are in turn embedded.
JM:  You mean a hallucination of x, when your 'I just see x, hear x, feel x' and so forth
is included in your knowledge? Or even substitutes for it? Maybe yes...
But then can you differentiate? (Or is this not a reasonable question?)
*


((to JM: ...know if you are NOT conscious? Well, you wouldn't.))

DN: Agreed.  If we 'delete the noumenon' we get: "How would you know if you are NOT?" or: "How would you know if you did NOT (know)?".  To which we might indeed respond: "You would not know, if you were NOT", or: "You would not know, if you did NOT (know)".
JM:  The classic question: "Am I?" and the classical answer: "Who is asking?"
*

DN: I think we need to distinguish between 'computers' and 'machines'.  I can see no reason in principle why an artifact could not 'know', and be motivated by such knowing to interact with the human world: humans are of course themselves 'natural artifacts'. [...]
 
JM: Are you including 'humans' among the machines or the computers? And dogs? Amoebas?
The main difference I see here is the 'extract' of the "human world" (or: "the world, as humans can interpret what they have learned") downsized to our choice of necessities which WE liked to design into an artifact (motors, cellphones, AI, AL). Yes, we (humans etc.) are artefacts, but we 'use' a lot of capabilities (mental etc. gadgets) we either don't know at all, or just accept as 'being human' (or an extract of human traits as 'being dog'), with no urge to build such into a microwave oven or an AI.
But then we are SSOO smart when we draw conclusions!!!!!
*
DN:

Bruno's approach is to postulate the whole 'ball of wax' as computation, so that any 'event' whether 'inside' or 'outside' the machine is 'computed'.
JM:
Bruno is right: accepting that 'any machine' is part of its "outside(?) totality", i.e. embedded into its ambiance, I would be scared to differentiate myself. There is no hermetic 'skin' - there are transitional effects transcending back and forth; we just do not observe those outside the 'topical boundaries' of our actual observation (model, as I call it).
 
DN:
The drift of my recent posts has been that even in this account, 'worlds' can emerge 'orthogonally' to each other, such that from their reciprocal perspectives, 'events' in their respective worlds will be 'imaginary'.  
JM:
I can't say: I have no idea how the world works, except for that little I interpreted into my 1st person narrative. I accept "maybe"-s.
And I have a way to 'express' myself: I use "I dunno".
 
Have fun
 
John

David Nyman

unread,
Jun 26, 2007, 5:10:49 PM6/26/07
to everyth...@googlegroups.com
On 26/06/07, John Mikes <jam...@gmail.com> wrote:

JM:  You mean a hallucination of x, when your 'I just see x, hear x, feel x' and so forth
is included in your knowledge? Or even substitutes for it? Maybe yes...

DN:  "I am conscious of knowing x" is distinguishable from "I know x".  The former has already differentiated 'knowing x' and so now "I know [knowing x]".  And so forth.  So knowing in this sense stands for a direct or unmediated 'self-relation', a species of unity between knower and known - hence its notorious 'incorrigibility'.

JM:  But then can you differentiate? (Or is this not a reasonable question?)

DN:  It seems that in the development of the individual at first there is no such differentiation; then we find that we are 'thrown' directly into a 'world' populated with 'things' and 'other persons'; later, we differentiate this from a distal 'real world' that putatively co-varies with it.  Now we are in a position to make a distinction between 'plural' or 'rational' modes of knowing, and solipsistic or 'crazy' ones.  But then it dawns that it's *our world* - not the 'real' one, that's the 'hallucination'.  No wonder we're crazy!  This evolutionarily-directed stance towards what we 'know' is of course so pervasive that it's only a minority (like the lost souls on this list!) who harbour any real concern about the precise status of such correlations.  Hence, I suppose, our continual state of confusion.

JM:  The classic question: "Am I?" and the classical answer: "Who is asking?"

DN:  Just so. Crazy, like I say.

JM: Are you including 'humans' into the machines or the computers? And dogs? Amoebas?

DN:  Actually, I just meant to distinguish between 'machines' considered physically and computational processes.  I really have no idea of course whether any non-human artefact will ever come to know and act in the sense that a human does. My point was only to express my logical doubts that it would ever do so in virtue of its behaving in a way that merely represents *to us* a process of computation.  However, the more I reason about this the stranger it gets, so I guess I really 'dunno'.
 
JM:  Bruno is right: accepting that 'any machine' is part of its "outside(?) totality", i.e. embedded into its ambiance, I would be scared to differentiate myself. There is no hermetic 'skin' - there are transitional effects transcending back and forth; we just do not observe those outside the 'topical boundaries' of our actual observation (model, as I call it).

DN:  Yes: all is relation (ultimately self-relation, IMO), and 'boundaries' merely delimit what is 'observable'.  In this context, what do you think about Colin's TPONOG post?

Regards

David

Bruno Marchal

unread,
Jun 28, 2007, 10:42:24 AM6/28/07
to everyth...@googlegroups.com

Le 21-juin-07, à 01:07, David Nyman a écrit :

>
> On Jun 5, 3:12 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>> Personally I don't think we can be *personally* mistaken about our own
>> consciousness even if we can be mistaken about anything that
>> consciousness could be about.
>
> I agree with this, but I would prefer to stop using the term
> 'consciousness' at all.


Why?

> To make a decision (to whatever degree of
> certainty) about whether a machine possessed a 1-person pov analogous
> to a human one, we would surely ask it the same sort of questions one
> would ask a human. That is: questions about its personal 'world' -
> what it sees, hears, tastes (and perhaps extended non-human
> modalities); what its intentions are, and how it carries them into
> practice. From the machine's point-of-view, we would expect it to
> report such features of its personal world as being immediately
> present (as ours are), and that it be 'blind' to whatever 'rendering
> mechanisms' may underlie this (as we are).
>
> If it passed these tests, it would be making similar claims on a
> personal world as we do, and deploying this to achieve similar ends.
> Since in this case it could ask itself the same questions that we can,
> it would have the same grounds for reaching the same conclusion.
>
> However, I've argued in the other bit of this thread against the
> possibility of a computer in practice being able to instantiate such a
> 1-person world merely in virtue of 'soft' behaviour (i.e.
> programming). I suppose I would therefore have to conclude that no
> machine could actually pass the tests I describe above - whether self-
> administered or not - purely in virtue of running some AI program,
> however complex. This is an empirical prediction, and will have to
> await an empirical outcome.


Now I have big problems understanding this post. I must think ... (and
go).

Bye,

Bruno

http://iridia.ulb.ac.be/~marchal/

David Nyman

unread,
Jun 28, 2007, 11:56:29 AM6/28/07
to everyth...@googlegroups.com
On 28/06/07, Bruno Marchal <mar...@ulb.ac.be> wrote:

Hi Bruno

The remarks you comment on are certainly not the best-considered or most cogently expressed of my recent posts.  However, I'll try to clarify if you have specific questions.  As to why I said I'd rather not use the term 'consciousness', it's because of some recent confusion and circular disputes ( e.g. with Torgny, or about whether hydrogen atoms are 'conscious').  Some of the sometimes confused senses (not by you, I hasten to add!) seem to be:

1) The fact of possessing awareness
2) The fact of being aware of one's awareness
3) the fact of being aware of some content of one's awareness

So now I would prefer to talk about self-relating to a 1-personal 'world', where previously I might have said 'I am conscious', and that such a world mediates or instantiates 3-personal content.  I've tried to root this (in various posts) in a logically or semantically primitive notion of self-relation that could underlie 0, 1, or 3-person narratives, and to suggest that such self-relation might be intuited as 'sense' or 'action' depending on the narrative selected. But crucially such nuances would merely be partial takes on the underlying self-relation, a 'grasp' which is not decomposable.

So ISTM that questions should attempt to elicit the machine's self-relation to such a world and its contents: i.e. its 'grasp' of a reality analogous to our own.  And ISTM the machine could also ask itself such questions, just as we can, if indeed such a world existed for it.

I realise of course that it's fruitless to try to impose my jargon on anyone else, but I've just been trying to see whether I could become less confused by expressing things in this way.  Of course, a reciprocal effect might just be to make others more confused!

David

Bruno Marchal

unread,
Jun 29, 2007, 8:41:14 AM6/29/07
to everyth...@googlegroups.com

Le 28-juin-07, à 17:56, David Nyman a écrit :

> On 28/06/07, Bruno Marchal <mar...@ulb.ac.be> wrote:
>
> Hi Bruno
>
> The remarks you comment on are certainly not the best-considered or
> most cogently expressed of my recent posts. However, I'll try to
> clarify if you have specific questions. As to why I said I'd rather
> not use the term 'consciousness', it's because of some recent
> confusion and circular disputes ( e.g. with Torgny, or about whether
> hydrogen atoms are 'conscious').


I am not sure that in case of disagreement (like our "disagreement"
with Torgny), changing the vocabulary is a good idea. This will not
make the problem go away; on the contrary, there is a risk of
introducing obscurity.

> Some of the sometimes confused senses (not by you, I hasten to add!)
> seem to be:
>
> 1) The fact of possessing awareness
> 2) The fact of being aware of one's awareness
> 3) the fact of being aware of some content of one's awareness


So just remember that in a first approximation I identify this with

1) being conscious (Dt?) .... for those who have
followed the modal posts. (Dx is for ~ Beweisbar (~x))
2) being self-conscious (DDt?)
3) being conscious of something (Dp?)

You can also have:

4) being self-conscious of something (DDp?).

Dp is really an abbreviation of the arithmetical proposition
~beweisbar('~p'). 'p' means the Gödel number describing p in the
language of the machine (by default it is the first-order arithmetic
language).
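
For readers who have not followed the modal posts, a compact restatement of these abbreviations may help. It is only a sketch of the standard provability-logic reading the above appears to assume (writing B for the Beweisbar predicate and t for any trivially provable sentence such as '0=0' is my own glossing convention, matching the '~ Beweisbar (~x)' remark):

    \[
    \begin{aligned}
    Bp &\;\equiv\; \mathrm{Bew}(\ulcorner p \urcorner) && \text{(the machine proves } p\text{)}\\
    Dp &\;\equiv\; \lnot B\lnot p \;\equiv\; \lnot\mathrm{Bew}(\ulcorner \lnot p \urcorner) && \text{(} p \text{ is consistent for the machine)}\\
    Dt &\;\equiv\; \lnot B\lnot t && \text{(sense 1: the machine's own consistency)}\\
    DDt &\;\equiv\; \lnot B\lnot(\lnot B\lnot t) && \text{(sense 2: consistency of that consistency)}\\
    Dp,\ DDp & && \text{(senses 3 and 4, relative to a content } p\text{)}
    \end{aligned}
    \]

On this reading Dt is just Gödel's consistency statement for the machine, which - by the second incompleteness theorem - a consistent machine cannot prove about itself.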


>
> So now I would prefer to talk about self-relating to a 1-personal
> 'world', where previously I might have said 'I am conscious', and that
> such a world mediates or instantiates 3-personal content.

This is ambiguous. The word 'world' is a bit problematic in my setting.


> I've tried to root this (in various posts) in a logically or
> semantically primitive notion of self-relation that could underlie 0,
> 1, or 3-person narratives, and to suggest that such self-relation
> might be intuited as 'sense' or 'action' depending on the narrative
> selected.

OK.


> But crucially such nuances would merely be partial takes on the
> underlying self-relation, a 'grasp' which is not decomposable.


Actually the elementary grasps are decomposable (into number relations)
in the comp setting.


>
> So ISTM that questions should attempt to elicit the machine's
> self-relation to such a world and its contents: i.e. its 'grasp' of a
> reality analogous to our own. And ISTM the machine could also ask
> itself such questions, just as we can, if indeed such a world existed
> for it.

OK, but the machine cannot know that (as we cannot know that).

>
> I realise of course that it's fruitless to try to impose my jargon on
> anyone else, but I've just been trying to see whether I could become
> less confused by expressing things in this way. Of course, a
> reciprocal effect might just be to make others more confused!

It is the risk indeed.


Best regards,

Bruno

http://iridia.ulb.ac.be/~marchal/

David Nyman

unread,
Jun 29, 2007, 11:17:38 AM6/29/07
to everyth...@googlegroups.com
On 29/06/07, Bruno Marchal <mar...@ulb.ac.be> wrote:

BM:  I am not sure that in case of disagreement (like our "disagreement"
with Torgny), changing the vocabulary is a good idea. This will not
make the problem go away; on the contrary, there is a risk of
introducing obscurity.

DN:  Yes, this seems to be the greater risk.  OK, in general I'll try to avoid it where possible.  I've taken note of the correspondences you provided for the senses of 'consciousness' I listed, and the additional one.

BM:  Actually the elementary grasps are decomposable (into number relations)
in the comp setting.

DN:  Then are you saying that 'action' can occur without 'sense' - i.e. that 'zombies' are conceivable?  This is what I hoped was avoided in the intuition that 'sense' and 'action' are, respectively, 1-p and 3-p aspects abstracted from a 0-p non-decomposable self-relation. The zombie then becomes merely a category error.  I thought that in COMP, number relations would be identified with this non-decomposable self-relation.  Ah.....but by 'decomposable', I think perhaps you mean that there are of course *different* number relations, so that this would then entail that there is a set of such fundamental relations such that *each* relation is individually non-decomposable, yes?

BM:  OK, but the machine cannot know that (as we cannot know that).

DN:  Do you mean that the machine can't know for sure the correspondence between its conscious world and the larger environment in which this is embedded and to which it putatively relates? Then I agree of course, and as you say, neither can we, for the sufficient reasons you have articulated.  So what I meant was that it would simply be in the same position that we are, which seems self-evident.

Anyway, as I said, the original post was probably ill advised, and I retract my quibbles about your terminology.

As to my point about whether such an outcome is likely vis-a-vis an AI program, it wasn't of course because you made any claims on this topic, but stimulated by another thread.  My thought goes as follows.  I seem to have convinced myself that, on the COMP assumption that *I* am such a machine, it is possible for other machines to instantiate conscious computations.  Therefore it would be reasonable for me to attribute consciousness to a machine that passed certain critical tests, though not such that I could definitely know or prove that it was conscious.  Nonetheless, such quibbles don't stop us from undertaking some empirical effort to develop machines with consciousness.  Two ways of doing this seem apparent.  First, to copy an existing such system ( e.g. a human) at an appropriate substitution level (as in your notorious gedanken experiment).  Second, to arrange for some initial system to undergo a process of 'psycho-physical' evolution (as humans have done) such that its 'sense' and 'action' narratives 'self-converge' on a consistent 1p-3p interface, as in our own case. 

In either of these cases, 'sense' and 'action' narratives 'self-converge', rather than being 'engineered', and any imputation of consciousness ( i.e. the attribution of semantics to the computation) continues to be 1p *self-attribution*, not a provable or definitely knowable 3p one.  The problem then seems to be: is there in fact a knowable method to 'design' all this into a system from the outside: i.e. a way to start from an external semantic attribution (e.g. an AI program) and then 'engineer' the sense and action syntactics of the instantiation in such a way that they converge on a consistent semantic interpretation from either 1p or 3p pov?  IOW, so that a system thus engineered would be capable of passing the same critical tests achievable by the first two types.  I can't see that we possess even a theory of how this could be done, and as somebody once said, there's nothing so practical as a good theory.  This is why I expressed doubt in the empirical outcome of any AI programme approached in this manner.  ISTM that references to Moore's Law etc. in this context are at present not much more than promissory notes written in invisible ink on transparent paper.

David.