Suppose one of these projects achieves one of the milestone goals of
such efforts; their AI becomes able to educate itself by reading books
and reference material, rather than having to have facts put in by
the developers. Perhaps it requires some help with this, and various
questions and ambiguities need to be answered by humans, but still this is
a huge advancement as the AI can now in principle learn almost any field.
Keep in mind that this AI is far from passing the Turing test; it is able
to absorb and digest material and then answer questions or perhaps even
engage in a dialog about it. But its complexity is, we will suppose,
substantially less than the human brain.
Now at some point the AI reads about the philosophy of mind, and the
question is put to it: are you conscious?
How might an AI program go about answering a question like this?
What kind of reasoning would be applicable? In principle, how would
you expect a well-designed AI to decide if it is conscious? And then,
how or why would the reasoning differ if a human rather than an AI
were answering it?
Clearly the AI has to start with the definition. It needs to know what
consciousness is, what the word means, in order to decide if it applies.
Unfortunately such definitions usually amount to either a list of
synonyms for consciousness, or use the common human biological heritage
as a reference. From Wikipedia: "Consciousness is a quality of the
mind generally regarded to comprise qualities such as subjectivity,
self-awareness, sentience, sapience, and the ability to perceive the
relationship between oneself and one's environment." Here we have four
synonyms and one relational description which would arguably apply to
any computer system that has environmental sensors, unless "perceive"
is also merely another synonym for conscious perception.
It looks to me like AIs, even ones much more sophisticated than I am
describing here, are going to have a hard time deciding whether they
are conscious in the human sense. Since humans seem essentially unable
to describe consciousness in any reasonable operational terms, there
doesn't seem any acceptable way for an AI to decide whether the word
applies to itself.
And given this failure, it calls into question the ease with which
humans assert that they are conscious. How do we really know that
we are conscious? For example, how do we know that what we call
consciousness is what everyone else calls consciousness? I am worried
that many people believe they are conscious simply because as children,
they were told they were conscious. They were told that consciousness
is the difference between being awake and being asleep, and assume on
that basis that when they are awake they are conscious. Then all those
other synonyms are treated the same way.
Yet most humans would not admit to any doubt that they are conscious.
For such a slippery and seemingly undefinable concept, it seems odd
that people are so sure of it. Why, then, can't an AI achieve a similar
degree of certainty? Do you think a properly programmed AI would ever
say, yes, I am conscious, because I have subjectivity, self-awareness,
sentience, sapience, etc., and I know this because it is just inherent in
my artificial brain? Presumably we could program the AI to say this,
and to believe it (in whatever sense that word applies), but is it
something an AI could logically conclude?
Hal
I guess if it was conscious... sure ;)
Quentin
Consciousness is the inner narrative composed of sounds/images/feelings which
presents itself as 'I'. What is (the origin/meaning of) 'I', I don't know,
but 'I' is the consciousness.
Quentin
On Saturday 02 June 2007 22:13:30 Hal Finney wrote:
John McCarthy notes that consciousness is not a single thing. He has
written some essays on what it would mean to create a conscious
artificial intelligence:
http://www-formal.stanford.edu/jmc/consciousness.html
http://www-formal.stanford.edu/jmc/zombie.pdf
Brent Meeker
There are three general types of consciousness arising from the fact
that there are three different classes of cognitive systems which
could be potentially reflected upon. The first are systems which
perceive physical concepts. When this perception is reflected upon,
we experience sensations. The second are systems which perceive
teleological concepts... closely related to our motivational systems.
When this is reflected upon, we experience emotions (or more
accurately feelings). The third type of consciousness is very weak in
humans... it's the ability to reflect upon systems which perceive
logical/mathematical things. Reflection upon these systems is
consciously experienced as an 'ontology-scape' (in a sense, conscious
awareness of the theory of everything). But as mentioned, this last
type of consciousness is very weak in humans, since our ability to
reflect upon our own cognitive systems is quite small and not done by
the brain directly (when engaged in logical reasoning, we humans are
not generally reflecting on our thoughts directly, but via indirect
means such as verbal or visual representations of these thoughts).
The third type of consciousness mentioned above is synonymous with
'reflective intelligence'. That is, any system successfully engaged
in reflective decision theory would automatically be conscious.
Incidentally, such a system would also be 'friendly' (ethical)
automatically. The ability to reason effectively about one's own
cognitive processes would certainly enable the ability to elaborate
precise definitions of consciousness and determine that the system was
indeed conforming to the aforementioned definitions.
Much of the confusion surrounding these issues stems from the fact
there's not one definition of 'general intelligence', but THREE.
There's the ability to detect patterns (which does not require
sentience), there's the ability to engage in symbolic reasoning (which
also does not require sentience), finally there's the ability to
engage in reflective reasoning (reasoning about reasoning). And it's
this third type of intelligence which DOES by necessity entail
consciousness. It also, by necessity, entails ethical behaviour.
Now there are those who point to powerful systems such as 'Corporations'
and 'Evolution' to try to argue that you can have intelligence without
consciousness. But these arguments are not convincing. It's true
that, for instance 'Evolution' is an intelligence system in ONE sense,
but it's certainly NOT a *reflective intelligence*. Nor, for
instance, is a 'Corporation'. A 'corporation' has SOME of the sub-systems
of person-hood, but not all of them. The existence of non-sentient
cognitive systems which display *some* of the features of
intelligence is a LONG way from establishing that you can have
*reflective intelligence* without consciousness. As has been pointed
out, RPOPS such as for instance 'Evolution' are NOT capable of
reflective intelligence and therefore cannot be taken as a disproof of
the claim that reflective intelligence must by necessity automatically
be friendly and sentient.
The reason I elaborate this carefully is because you will, on certain
mailing lists, run into a number of artificial intelligence crack-pots
claiming that (1) You can have reflective intelligence without
consciousness and (2) AI's with reflective intelligence don't have to
be ethical and might destroy the world. Ignore these crack-pot claims
when you see them.
On Jun 3, 9:20 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:
> How do you derive (a) ethics and (b) human-friendly ethics from reflective
> intelligence? I don't see why an AI should decide to destroy the world,
> save the world, or do anything at all to the world, unless it started off
> with axioms and goals which pushed it in a particular direction.
When reflective intelligence is applied to cognitive systems which
reason about teleological concepts (which include values, motivations
etc) the result is conscious 'feelings'. Reflective intelligence,
recall, is the ability to correctly reason about cognitive systems.
When applied to cognitive systems reasoning about teleological
concepts this means the ability to correctly determine the
motivational 'states' of self and others - as mentioned - doing this
rapidly and accurately generates 'feelings'. Since, as has been known
since Hume, feelings are what ground ethics, the generation of
feelings which represent accurate tokens of motivational states
automatically leads to ethical behaviour.
Bad behaviour in humans is due to a deficit in reflective
intelligence. It is known for instance, that psychopaths have great
difficulty perceiving fear and sadness and negative motivational
states in general. Correct representation of motivational states is
correlated with ethical behaviour.
Thus it appears that reflective
intelligence is automatically correlated with ethical behaviour. Bear
in mind, as I mentioned that: (1) There are in fact three kinds of
general intelligence, and only one of them ('reflective intelligence')
is correlated with ethics. The other two are not. A deficit in
reflective intelligence does not affect the other two types of general
intelligence (which is why for instance psychopaths could still score
highly in IQ tests). And (2) Reflective intelligence in human beings
is quite weak. This is the reason why intelligence does not appear to
be much correlated with ethics in humans. But this fact in no way
refutes the idea that a system with full and strong reflective
intelligence would automatically be ethical.
"I believe that consciousness is, essentially, the way information
feels when being processed. Since matter can be arranged to process
information in numerous ways of vastly varying complexity, this
implies a rich variety of levels and types of consciousness."
Source: http://www.edge.org/q2007/q07_7.html
Jason
On Jun 3, 6:11 am, "Stathis Papaioannou" <stath...@gmail.com> wrote:
In particular I don't think an AI could be expected to claim that it
knows that it is conscious, that consciousness is a deep and intrinsic
part of itself, that whatever else it might be mistaken about it could
not be mistaken about being conscious. I don't see any logical way it
could reach this conclusion by studying the corpus of writings on the
topic. If anyone disagrees, I'd like to hear how it could happen.
And the corollary to this is that perhaps humans also cannot legitimately
make such claims, since logically their position is not so different
from that of the AI. In that case the seemingly axiomatic question of
whether we are conscious may after all be something that we could be
mistaken about.
Hal
Not only do we have one word, but we have plenty of words which try to
grasp the idea. Denying consciousness phenomena like this is playing a
vocabulary game... not denying the subject of the word.
Quentin
Hi folks,
Re: How would a computer know if it were conscious?
Easy.
The computer would be able to go head to head with a human in a competition.
The competition?
Do science on exquisite novelty that neither party had encountered.
(More interesting: Make their life depend on getting it right. The
survivors are conscious).
Only conscious entities can do open ended science on the exquisitely novel.
You cannot teach something how to deal with the exquisitely novel because
you haven't any experience of it to teach. It means that the entity must
be configured as a machine that "learns how to learn something". This is
one meta-level removed from your usual AI situation. It's what humans do.
During neurogenesis and development, humans "learn how to learn how to
learn".
If the computer/scientist can match the human/scientist...it's as
conscious as a human. It must be.
cheers
colin hales
Cheers
--
----------------------------------------------------------------------------
A/Prof Russell Standish Phone 0425 253119 (mobile)
Mathematics
UNSW SYDNEY 2052 hpc...@hpcoders.com.au
Australia http://www.hpcoders.com.au
----------------------------------------------------------------------------
An AI that takes information from books might experience qualia similar to
those we can experience. The AI will be programmed to do certain tasks and
it must thus have a notion of whether what it is doing is OK, not OK, or
completely wrong.
If things are going wrong and it has to revert what it has just done, it may
feel some sort of pain. Just like what happens to us if we pick up something
that is very hot.
So, I think that there will be a mismatch between the qualia the AI
experiences and what "it reads about that we experience". The AI won't read
the information like we read it. I think it will directly experience it as
some qualia, just like we experience information coming in via our senses
into our brain.
The meaning we associate with the text would not be accessible to the AI,
because ultimately that is linked to the qualia we experience.
Perhaps what the AI experiences when it is processing information is similar
to an animal that is moving in some landscape. Maybe when it reads something
then that manifests itself like some object it sees. If it processes
information then that could be like picking up that object putting it next
to a similar looking object.
But if that object represents a text about consciousness then there is no
way for the AI to know that.
Saibal
On Jun 3, 11:11 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:
> Determining the motivational states of others does not necessarily involve
> feelings or empathy. It has been historically very easy to assume that other
> species or certain members of our own species either lack feelings or, if
> they have them, it doesn't matter. Moreover, this hasn't prevented people
> from determining the motivations of inferior beings in order to exploit
> them. So although having feelings may be necessary for ethical behaviour, it
> is not sufficient.
You are ignoring the distinction I made between three different kinds
of general intelligence. I gave three different definitions, remember:
*Pattern Recognition Intelligence
*Symbolic Reasoning Intelligence
*Reflective Intelligence
A mere 'determination of the motivational states of self and others'
does not by itself constitute *reflective intelligence* according to my
definitions. Not only must the motivational states of self/others be
determined and represented (this process by itself does not require
ethics or sentience), these representations must be *reflected* upon.
Only this final step, I'm saying, leads to ethical behaviour. Once
you have a system performing *full* reflection correctly, you get
feelings. And, I maintain, there is no real difference between
feeling and motivation.
>
> Psychopaths are often very good at understanding other peoples' feelings, as
> evidenced by their ability to manipulate them. The main problem is that they
> don't *care* about other people; they seem to lack the ability to be moved
> by other peoples' emotions and lack the ability to experience emotions such
> as guilt. But this isn't part of a general inability to feel emotion, as
> they often present as enraged, entitled, depressed, suicidal, etc., and
> these emotions are certainly enough to motivate them. Psychopaths have a
> slightly different set of emotions, regulated in a different way compared to
> the rest of us, but are otherwise cognitively intact.
See what I said above about the distinction between three different
kinds of general intelligence. It's true that the psychopath can
indeed understand others in an *abstract* *intellectual* sense
(pattern recognition and symbolic reasoning intelligence), but what
the psychopath lacks is the ability to fully *reflect* upon this
understanding (reflective intelligence).
You yourself admit: 'psychopaths have a slightly different set of
emotions, regulated in a different way compared to the rest of us'.
Therefore it simply isn't true that the psychopath is 'cognitively
intact'. Again, the psychopath can obtain an abstract, intellectual
understanding of others, but lacks the ability to fully reflect upon
this information in order to directly experience it (as qualia).
It is documented that psychopaths lack the ability to
experience the full range of emotions - specifically they appear
unable to fully experience certain negative emotions such as fear and
sadness. (Although they can, as you point out, experience *some*
kinds of emotions). See the book 'Social Intelligence' (by Daniel
Goleman) for references about the emotional deficits of psychopaths.
>
> Thus it appears that reflective
>
> > intelligence is automatically correlated with ethical behaviour. Bear
> > in mind, as I mentioned that: (1) There are in fact three kinds of
> > general intelligence, and only one of them ('reflective intelligence')
> > is correlated with ethics. The other two are not. A deficit in
> > reflective intelligence does not affect the other two types of general
> > intelligence (which is why for instance psychopaths could still score
> > highly in IQ tests). And (2) Reflective intelligence in human beings
> > is quite weak. This is the reason why intelligence does not appear to
> > be much correlated with ethics in humans. But this fact in no way
> > refutes the idea that a system with full and strong reflective
> > intelligence would automatically be ethical.
>
> Perhaps I haven't quite understood your definition of reflective
> intelligence. It seems to me quite possible to "correctly reason about
> cognitive systems", at least well enough to predict their behaviour to a
> useful degree, and yet not care at all about what happens to them.
> Furthermore, it seems possible to me to do this without even suspecting that
> the cognitive system is conscious, or at least without being sure that it is
> conscious.
>
> --
> Stathis Papaioannou-
See you haven't understood my definitions. It may be my fault due to
the way I worded things. You are of course quite right that: 'it's
possible to correctly reason about cognitive systems at least well
enough to predict their behaviour to a useful degree and yet not care
at all about what happens to them'. But this is only pattern
recognition and symbolic intelligence, *not* fully reflective
intelligence. Reflective intelligence involves additional
representations enabling a system to *integrate* the aforementioned
abstract knowledge (and experience it directly as qualia). Without
this ability an AI would be unable to maintain a stable goal structure
under recursive self improvement and therefore would remain limited.
I think it is a very indefinite definition. It doesn't tell us anything about the levels and types and how we would make a conscious being. I think John McCarthy has a much more explicit and useful essay on his website.
Brent Meeker
> See you haven't understood my definitions. It may be my fault due to
> the way I worded things. You are of course quite right that: 'it's
> possible to correctly reason about cognitive systems at least well
> enough to predict their behaviour to a useful degree and yet not care
> at all about what happens to them'. But this is only pattern
> recognition and symbolic intelligence, *not* fully reflective
> intelligence. Reflective intelligence involves additional
> representations enabling a system to *integrate* the aforementioned
> abstract knowledge (and experience it directly as qualia). Without
> this ability an AI would be unable to maintain a stable goal structure
> under recursive self improvement and therefore would remain limited.
But what feeling? You assume that the AI has the same values to reflect on as a normal human. Even normal humans have feelings of competitiveness; they value domination and security. An AI, however reflective, might conclude the world would be better without humans, leaving more resources for copies of itself. Of course you could define this away as "not bad", but then we're left to wonder what counts as "bad behaviour" and what doesn't.
Brent Meeker
That's not so clear to me. Certainly one can be uncertain about what one directly experiences, as illustrated by various optical illusions. And certainly one can be wrong about what one has just experienced. I suspect that, except for some tautological definition of "directly experienced", one can be wrong about them.
Of course that doesn't imply that one can be wrong about having some experience at all, but maybe... On the modular view of the brain, part A that evaluates could be wrong about part B experiencing something.
>However, it could not know that this
> experience corresponds to what any other entity calls "consciousness".
> It is possible that what other people call "consciousness" is very
> different to what I experience, and certainly a computer would do well
> to question whether its experiences, such as they may be, are
> "consciousness" as would befit a human; but it couldn't be in doubt that
> it had some experiences.
I agree; but the question isn't whether it could be in doubt, but whether it could be mistaken. I don't think this question can be answered without first having a good 3rd person theory of what constitutes consciousness.
Brent Meeker
Taken strictly, I think this idea is incoherent. Essential to intelligence is taking some things as more important than others. That's the difference between data collecting and theorizing. It is a fallacy to suppose that emotion can be divorced from reason - emotion is part of reason. An interesting example comes from attempts at mathematical AI. Theorem proving programs have been written and turned loose on axiom systems - but what results is a lot of theorems that mathematicians judge to be worthless and trivial.
Otherwise I entirely agree with Stathis.
>It could work as the ideal disinterested scientist,
> doing theoretical physics without regard for its own or anyone else's
> feelings. You would still have to say that it was super-intelligent,
> even though it is an idiot from the reflective intelligence
> perspective. It also would pose no threat to anyone because all it wants
> to do and all it is able to do is solve abstract problems, and in fact I
> would feel much safer around this sort of AI than one that has real
> power and thinks it has my best interests at heart.
>
> Secondly, I don't see how the ability to fully empathise would help the
> AI improve itself or maintain a stable goal structure. Adding memory and
> processing power would bring about self-improvement, perhaps even
> recursive self-improvement if it can figure out how to do this more
> effectively with every cycle, and yet it doesn't seem that this would
> require the presence of any other sentient beings in the universe at
> all, let alone the ability to empathise with them.
>
> Finally, the majority of evil in the world is not done by psychopaths,
> but by "normal" people who are aware that they are causing hurt, may
> feel guilty about causing hurt, but do it anyway because there is a
> competing interest that outweighs the negative emotions.
Or they may feel proud of their actions because they have supported those close to them against competition from those distant from them. To suppose that empathy and reflection can eliminate all competition for limited resources strikes me as pollyannish.
Brent Meeker
I think that IF a computer were conscious (I don't believe it is
possible), then the way we could know it is conscious would not be by
interviewing it with questions and looking for the "right" answers.
We could know it is conscious if the computer, on its own, started
asking US (or other computers) questions about what it was
experiencing. Perhaps it would say things like, "Sometimes I get
this strange and wonderful feeling that I am "special" in some way. I
feel that what I am doing really is significant to the course of
history, that I am in some story." Or perhaps, "Sometimes I wish that
I could find out whether what I am doing is somehow significant, that
I am not just a duplicatable thing, and that what I am doing is not
'meaningless'."
Tom
On Jun 4, 11:15 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:
> On 04/06/07, marc.ged...@gmail.com <marc.ged...@gmail.com> wrote:
>
> See you haven't understood my definitions. It may be my fault due to
>
> > the way I worded things. You are of course quite right that: 'it's
> > possible to correctly reason about cognitive systems at least well
> > enough to predict their behaviour to a useful degree and yet not care
> > at all about what happens to them'. But this is only pattern
> > recognition and symbolic intelligence, *not* fully reflective
> > intelligence. Reflective intelligence involves additional
> > representations enabling a system to *integrate* the aforementioned
> > abstract knowledge (and experience it directly as qualia). Without
> > this ability an AI would be unable to maintain a stable goal structure
> > under recursive self improvement and therefore would remain limited.
>
> Are you saying that a system which has reflective intelligence would be able
> to in a sense emulate the system it is studying, and thus experience a very
> strong form of empathy?
Yes
>That's an interesting idea, and it could be that
> very advanced AI would have this ability; after all, humans have the ability
> for abstract reasoning which other animals almost completely lack, so why
> couldn't there be a qualitative (or nearly so) rather than just a
> quantitative difference between us and super-intelligent beings?
But I don't think this is qualitatively different to what humans do
already. It does seem that our ability to feel does in part involve
emulating other people's inner motivational states. See the research
on 'Mirror Neurons' . Or again, Daniel Goleman's 'Social
Intelligence' talks about this.
http://en.wikipedia.org/wiki/Mirror_neurons
It seems that we humans are already pretty good at reflection on
motivation. Certainly reflection on motivation gives rise to
feelings. Emotions are the human strength. Our 'cutting edge' so to
speak.
But remember that 'reflection on motivation' is only one kind of
reflection. There are other kinds of reflection that we humans are
not nearly so good at. I listed three general classes of reflection
above - one type of reflection we humans seem to be very poor at is
'reflection on abstract reasoning' (or reflection on logic/
mathematics). With regard to this type of reflection we are rather in an
analogous position to the emotional retard. We have symbolic/abstract
knowledge of mathematics (symbolic and pattern recognition
intelligence), but this is not directly reflected in our conscious
experience (or at least it is only in our conscious awareness very
weakly). For example, you may know (intellectually) that 2+2=4 but
you do not *consciously experience* this information. You are
suffering from 'mathematical blind sight'. Now giving a super-human a
strong ability to reflect on math/logic *would* definitely be a
qualitative difference between us and super-intellects.
But here is something really cool: By intensely forcing yourself and
training yourself to think constantly about math/logic, it may be
possible for a human to partially draw math/logic into actual
conscious awareness! I can tell you here that in fact I claim to have
done just that.... and the result is.... very interesting ;) Suffice
it to say that I believe that math/logic knowledge appears in
consciousness as a sort of 'Ontology-Scape'. Just as the ability to
reflect on motivation gives rise to emotional experience, so I believe
that the ability to reflect on math/logic gives rise to a new kind of
conscious experience... what I call 'the ontology scape'. As I said,
I am of the opinion that if you really force yourself and train
yourself, it's possible to partially draw this 'Ontology scape' into
your own conscious awareness.
>
> However, what would be wrong with a super AI that just had large amounts of
> pattern recognition and symbolic reasoning intelligence, but no emotions at
> all? It could work as the ideal disinterested scientist, doing theoretical
> physics without regard for its own or anyone else's feelings. You would
> still have to say that it was super-intelligent, even though it is an
> idiot from the reflective intelligence perspective. It also would pose no
> threat to anyone because all it wants to do and all it is able to do is
> solve abstract problems, and in fact I would feel much safer around this
> sort of AI than one that has real power and thinks it has my best interests
> at heart.
As I said, intelligence has three parts: Pattern Recognition, Symbolic
Reasoning and Reflective. You can't cut out 1/3rd of real
intelligence and still expect your system to function
effectively! ;) A system missing reflective intelligence would have
serious cognitive deficits. (In fact, for the reasons I explain
below, I believe such a system would be unable to improve itself).
>
> Secondly, I don't see how the ability to fully empathise would help the AI
> improve itself or maintain a stable goal structure. Adding memory and
> processing power would bring about self-improvement, perhaps even recursive
> self-improvement if it can figure out how to do this more effectively with
> every cycle, and yet it doesn't seem that this would require the presence of
> any other sentient beings in the universe at all, let alone the ability to
> empathise with them.
Self-improvement requires more than just extra hardware. It also
requires the ability to integrate new knowledge with an existing
knowledge base in order to create truly original (novel) knowledge.
But this appears to be precisely the definition of reflective
intelligence! Thus, it seems that a system missing reflective
intelligence simply cannot improve itself in an ordered way. To
improve, a current goal structure has to be 'extrapolated' into a new
novel goal structure which none-the-less does not conflict with the
spirit of the old goal structure. But nothing but a *reflective*
intelligence can possibly make an accurate assessment of whether a new
goal structure is compatible with the old version! This stems from
the fact that comparison of goal structures requires a *subjective*
value judgement and it appears that only a *sentient* system can make
this judgement (since as far as we know, ethics/morality is not
objective). This proves that only a *sentient* system (a *reflective
intelligence*) can possibly maintain a stable goal structure under
recursive self-improvement.
>
> Finally, the majority of evil in the world is not done by psychopaths, but
> by "normal" people who are aware that they are causing hurt, may feel guilty
> about causing hurt, but do it anyway because there is a competing interest
> that outweighs the negative emotions.
>
> --
> Stathis Papaioannou
Yes, true. But see what I said about there being more than one kind of
reflection. Strong empathy and feelings alone (caused by reflection
on motivation) are not enough. The human brain is not functioning as a
fully reflective intelligence since, as I pointed out, we don't have
much ability to reflect on math/logic.
Incidentally, as regards our debate yesterday on psychopaths, there
appears to be some basis for thinking that the psychopath *does*
have a general inability to feel emotions. From the wiki:
http://en.wikipedia.org/wiki/Psychopath
"Their emotions are thought to be superficial and shallow, if they
exist at all."
"It is thought that any emotions which the primary psychopath exhibits
are the fruits of watching and mimicking other people's emotions."
So the supposed emotional displays could be faked. Thus it could well
be the case that there is an inability to 'reflect on
motivation' (to feel).
On Jun 5, 5:05 am, Brent Meeker <meeke...@dslextreme.com> wrote:
> Stathis Papaioannou wrote:
>
> > However, what would be wrong with a super AI that just had large amounts
> > of pattern recognition and symbolic reasoning intelligence, but no
> > emotions at all?
>
> Taken strictly, I think this idea is incoherent. Essential to intelligence is taking some things as more important than others. That's the difference between data collecting and theorizing. It is a fallacy to suppose that emotion can be divorced from reason - emotion is part of reason. An interesting example comes from attempts at mathematical AI. Theorem proving programs have been written and turned loose on axiom systems - but what results are a lot of theorems that mathematicians judge to be worthless and trivial.
Yeah. That's the difference between *reflective intelligence* and
ordinary *symbolic logic* + *pattern recognition*. I would say that
ordinary reason is a part of emotion (or that reflective intelligence
encompasses the other two types). But you're right, you can't divorce
conscious experience from reason. It's from conscious experience that
value judgements come.
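Brent's point about theorem provers can be made concrete. The following toy enumerator is my own illustrative sketch, not any real prover: starting from two axioms it applies conjunction-introduction blindly. Every output is a valid theorem, yet almost all of them are worthless, which is exactly the missing-value-judgement gap being discussed.

```java
import java.util.*;

// A deliberately naive theorem enumerator. It is sound (everything it
// emits follows from the axioms) but has no notion of which theorems
// are *interesting*, so its output is dominated by trivia.
public class NaiveProver {
    public static List<String> enumerate(List<String> axioms, int steps) {
        List<String> theorems = new ArrayList<>(axioms);
        for (int i = 0; i < steps; i++) {
            List<String> next = new ArrayList<>();
            for (String p : theorems)
                for (String q : theorems)
                    next.add("(" + p + " & " + q + ")"); // and-introduction
            theorems.addAll(next);
        }
        return theorems;
    }

    public static void main(String[] args) {
        List<String> out = enumerate(Arrays.asList("A", "B"), 1);
        // prints: 6 theorems, e.g. (A & A)
        System.out.println(out.size() + " theorems, e.g. " + out.get(2));
    }
}
```

One step over two axioms already yields four new "theorems" of the form (A & A); the count grows quadratically per step while the mathematical interest stays at zero.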
>
> > Finally, the majority of evil in the world is not done by psychopaths,
> > but by "normal" people who are aware that they are causing hurt, may
> > feel guilty about causing hurt, but do it anyway because there is a
> > competing interest that outweighs the negative emotions.
>
> Or they may feel proud of their actions because they have supported those close to them against competition from those distant from them. To suppose that empathy and reflection can eliminate all competition for limited resources strikes me as pollyannish.
>
> Brent Meeker-
The human brain doesn't function as a fully reflective system. Too
much is hard-wired and not accessible to conscious experience. Our
brains simply don't function as a properly integrated system. Full
reflection would enable the ability to reach into our underlying
preferences and change them.
On the contrary, they are well tuned for evolutionary survival in a hunter-gatherer society. Your ancestors are more likely to have been killers than victims.
>Full
> reflection would enable the ability to reach into our underlying
> preferences and change them.
But how would you want to change them. Or put another way, you can change your preferences - you just can't want to change them.
I think you are assuming that empathy trumps all other values. I see no reason to believe this - or even to wish it.
Brent Meeker
> I don't see that you've made your point.
> If you achieve this, you have created an artificial
> creative process, a sort of holy grail of AI/ALife.
Well? So what? Somebody has to do it. :-)
The 'holy grail' terminology implies (as subtext) that the creative process
is some sort of magical, unapproachable topic, or the exclusive domain of
some discipline X which is not mine... beliefs I can't really buy into. I
don't need anyone's permission to do what I do.
Creativity in humans is perfectly natural, evolved in the brutal and
inefficient experimental lab of evolution and survives because it was
necessary for a _scientist_ (not cognitive agents of any other kind - I do
not claim that) to come into existence. A scientist is a specific, highly
specialised, very highly defined behavioural subset of the biological
world, with reproducible outputs that relate directly to consciousness and
that can be verified. It is the ONLY example to use in respect of any
claims of consciousness in an artifact.
I suppose I have made a judgement call - a design choice- as an engineer
doing AGI - which I am perfectly entitled to do. It uses the only real
benchmark we have for the processes involved. As an empirical proposition
it is better placed than anything else I have ever heard from anyone
anywhere, ever....why?....It has measurable outcomes using the _one and
only_ definitive, verified and repeatable provider of 3rd person evidence
of the creative process and its intimate relationship to
consciousness - scientists themselves.
> However, it seems far from obvious that consciousness should
> be necessary.
It is perfectly obvious! Do a scientific experiment on yourself. Close
your eyes and then tell me you can do science as well. Qualia gone =
Science GONE. For crying out loud - am I the only one that gets
this?......Any other position that purports to be able to deliver anything
like the functionality of a scientist without involving ALL the
functionality (especially qualia) of a scientist must be based on
assumptions - assumptions I do not make.
It's not that all AI is necessarily conscious. It is not that all
conscious entities are scientists. The position is designed to
establish one single, very specific, cogent, conclusive, verifiable
position _once_. Having scientifically reached that point, other positions on the
role/necessity/presence of consciousness in biology and machines can follow.
> Biological evolution is widely considered to be creative
> (even exponentially so), but few would argue that the
> biosphere is conscious (and has been for ca 4E9 years).
The second law of thermodynamics is the driver. I know that!.....and who
is arguing that the biosphere is conscious? It has nothing to do with my
engineering position/design/benchmarking choice. The creative act is, in
the case of scientists, being "verifiably and serendipitously not-wrong"
in respect of propositions about the natural world = empirical method.
This, in a human, including all the relevant cognitive processes - and
_especially_ the physics of qualia - is a perfectly valid benchmark.
If a human must have consciousness to do science (the physics that exposes
a scientist/agent appropriately to the real novelty around them, external
to the scientist) and a machine can do science as well then that machine
is conscious. QED. If you know what qualia are (have a proposition for
them) and you switch them off (which I am proposing) and the ability to do
science fails.... QED... and your proposition in respect of qualia has
reached a level of empirical validity. This method has empirical teeth.
Indeed I would defy _anyone_ to undermine it without making unfounded
a-priori assumptions as to the nature and role of the physics of
qualia....that is, an unscientific or quasi-religious adherence to axioms
that were defined by the observation process in the first place.
I believe this kind of discussion in this thread to be flawed because time
and time again it fails to make use of the simplest of questions. Read it
very carefully:
"What is the underlying universe in which those things we observe in brain
material (atoms, molecules, cells doing their dance), all defined _using_
observation would be/could be responsible _for
observation itself_ AND make it look like it does (atoms, molecules, cells
doing their dance)"
For _that_ universe is the one we inhabit. This is an empirically
testable, validly explored area. That universe - whatever it is - is not
the universe defined by/within observation (atoms, molecules, cells etc).
It is the universe that LOOKS LIKE atoms, molecules, cells when you use the
observation faculty provided by it, because whatever it is those things are
made of, WE _ARE_ IT.
If you can't see this....let's see.... Ok.....science/maths defines the
SINE WAVE:
f(t) = sin(t)
we observe a sine wave, we characterise it as appearing within our
consciousness as shown. Now ask "what is it that is behaving
sine-wave-ly". Whatever that is, it is NOT a sine wave. Another question
to ask "What is it like to BE a sine wave?". These are all aspects of the
same thing.
Now consider one of the models (sine waves) -
computationalism/functionalism - defined through an observation. What is
the observation? .....That the universe seems to be performing computation or
information processing....so....what do we do with that observation?....
We jump to the unfounded conclusion that any form of computation in some
undefined way leads to consciousness (= all sine waves are
conscious)......
This is as flawed as any similar explanation, as it is logically
indistinguishable from, and as empirically useless as, the equivalent belief: "I
believe observation (consciousness) is invoked by the tooth fairy on
Thursdays".
The only real, verifiable evidence of consciousness we have is the
existence of scientists and their output. It may seem a hard task to set
yourself as an AI worker... but TOUGH - nobody said it had to be easy -
and it is no reason to set it aside in favour of an empirically useless
"tooth fairy hypothesis for consciousness".
At least I have a plan.
so in relation to....
> I don't see that you've made your point.
I'd like to think that I have. My AI/Human scientist face-off stands as is
and I defy anyone to come up with something practical/better that isn't
axiomatically flawed. Everything is scientific evidence of something.
Scientists are no exception.
cheers,
colin hales
public class Consciousness {
    // Reconstructed wrapper: the opening "public static void main" line
    // survives only in the quoted copy of this message; the class and the
    // println helper are added so the bare calls compile as written.
    static void println(String s) { System.out.println(s); }

    public static void main(String[] a) {
        println("Sometimes I get this strange and wonderful feeling");
        println("that I am 'special' in some way.");
        println("I feel that what I am doing really is significant");
        println("to the course of history, that I am in some story.");
        println("Sometimes I wish that I could find out whether what");
        println("I am doing is somehow significant, that I am not just");
        println("a duplicatable thing, and that what I am doing");
        println("is not 'meaningless'.");
    }
}
You can make more complicated programs, though that is not so obvious, by
"genetic programming". But it will take a rather long time. Nature
had to work for over a billion years to make human beings. But with
genetic programming you will succeed after only a million
years. Then you will have a program that is just as conscious as you are.
--
Torgny Tholerus
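The flavour of Torgny's "genetic programming" remark can be sketched with a toy mutate-and-select loop. This is my own illustration, not his code, and it evolves a string rather than a program, so it is a genetic algorithm rather than genetic programming proper; the target phrase, alphabet and acceptance rule are arbitrary choices made for the demo.

```java
import java.util.*;

// (1+1)-style evolutionary search: keep a single candidate, mutate one
// character at a time, and accept any child that scores at least as well.
public class ToyEvolution {
    static final String TARGET = "i am conscious";
    static final String ALPHABET = "abcdefghijklmnopqrstuvwxyz ";
    static final Random RNG = new Random(1);

    // Fitness = number of positions matching the target.
    static int fitness(String s) {
        int score = 0;
        for (int i = 0; i < TARGET.length(); i++)
            if (s.charAt(i) == TARGET.charAt(i)) score++;
        return score;
    }

    // Mutate one random position to a random alphabet character.
    static String mutate(String s) {
        char[] c = s.toCharArray();
        c[RNG.nextInt(c.length)] = ALPHABET.charAt(RNG.nextInt(ALPHABET.length()));
        return new String(c);
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < TARGET.length(); i++)
            sb.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
        String best = sb.toString();
        int generations = 0;
        while (fitness(best) < TARGET.length()) {
            String child = mutate(best);
            if (fitness(child) >= fitness(best)) best = child;
            generations++;
        }
        System.out.println("reached \"" + best + "\" after " + generations + " generations");
    }
}
```

The point of the toy is the contrast Torgny draws: blind variation plus selection does converge, but only because the fitness function was handed to it; evolving anything as open-ended as a conscious program is a vastly harder search.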
On Jun 5, 6:50 pm, Torgny Tholerus <tor...@dsv.su.se> wrote:
>
> public static void main(String[] a) {
>
> println("Sometimes I get this strange and wonderful feeling");
> println("that I am 'special' in some way.");
> println("I feel that what I am doing really is significant");
> println("to the course of history, that I am in some story.");
> println("Sometimes I wish that I could find out whether what");
> println("I am doing is somehow significant, that I am not just");
> println("a duplicatable thing, and that what I am doing");
> println("is not 'meaningless'.");
>
> }
>
> You can make more complicated programs, that is not so obvious, by
> "genetic programming". But it will take rather long time. The nature
> had to work for over a billion years to make the human beings. But with
> genetic programming you will succeed already after only a million
> years. Then you will have a program that is equally conscious as you are.
>
> --
> Torgny Tholerus
An additional word of advice for budding programmers. For heaven's
sake don't program in Java! It'll take you one million years to
achieve the same functionality as only a few years of Ruby code:
http://www.wisegeek.com/contest/what-is-ruby.htm
Cheers!
I never implied that. I'm surprised you inferred it. Holy grail just
means something everyone (in that field) is chasing after, so far
unsuccessfully.
If you figure out a way to do it, good for you! Someone will do it one
day, I believe, otherwise I wouldn't be in the game either. But the
problem is damned subtle.
>
> > However, it seems far from obvious that consciousness should
> > be necessary.
>
> It is perfectly obvious! Do a scientific experiment on yourself. Close
> your eyes and then tell me you can do science as well. Qualia gone =
> Science GONE. For crying out loud - am I the only only that gets
> this?......Any other position that purports to be able to deliver anything
> like the functionality of a scientist without involving ALL the
> functionality (especially qualia) of a scientist must be based on
> assumptions - assumptions I do not make.
>
I gave a counter example, that of biological evolution. Either you
should demonstrate why you think biological evolution is uncreative,
or why it is conscious.
My view includes:
1/
* 'Consciousness' is the subjective impression of being here now
and the word has great overlap with 'awareness', 'sentience',
and others.
* The *experience* of consciousness may best be seen as the
registration of novelty, i.e. the difference between
expectation-prediction and what actually occurs. As such it is a
process and not a 'thing' but would seem to require some fairly
sophisticated and characteristic physiological arrangements or
silicon based hardware, firmware, and software.
* One characteristic logical structure that must be embodied,
and at several levels I think, is that of self-referencing or
'self' observation.
* Another is autonomy or self-determination which entails being
embodied as an entity within an environment from which one is
distinct but which provides context and [hopefully] support.
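The "registration of novelty" idea in point 1/ can be sketched as a toy predictor. This is my hypothetical illustration of the suggestion, not Mark's design; the learning rate and novelty threshold are arbitrary. The agent maintains a running expectation of its next input and flags the moments where prediction and reality diverge.

```java
// Novelty registration as prediction error: an exponentially-weighted
// running estimate of the input, with a "novel" flag when the estimate
// misses badly.
public class NoveltyDetector {
    private double prediction = 0.0;
    private final double rate = 0.5;       // learning rate (assumed)
    private final double threshold = 1.0;  // novelty threshold (assumed)

    /** Returns true when the input is "novel", i.e. badly predicted. */
    public boolean observe(double input) {
        double error = Math.abs(input - prediction);
        prediction += rate * (input - prediction); // update the expectation
        return error > threshold;
    }

    public static void main(String[] args) {
        NoveltyDetector d = new NoveltyDetector();
        double[] world = {0.1, 0.1, 0.1, 5.0, 5.0};
        for (double x : world)                 // only the jump to 5.0
            System.out.println(x + " novel? " + d.observe(x)); // registers
    }
}
```

On steady input the flag stays off; the jump to 5.0 registers until the expectation catches up, which is the "difference between expectation-prediction and what actually occurs" in miniature.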
2/ There are other issues - lots of them probably - but to be
brief here I say that some things implied and/or entailed in the
above are:
* The experience of consciousness can never be an awareness of
'all that is' but maybe the illusion that the experience is all
that is, at first flush, is unavoidable and can only be overcome
with effort and special attention. Colloquially speaking:
Darwinian evolution has predisposed us to naive realism because
awareness of the processes of perception would have got in the
way of perceiving hungry predators.
* We humans now live in a cultural world wherein our responses
to society, nature and 'self' are conditioned by the actions,
descriptions and prescriptions of others. We have dire need of
ancillary support to help us distinguish the nature of this
paradox we inhabit: experience is not 'all that is' but only a
very sophisticated and summarised interpretation of recent
changes to that which is and our relationships thereto.
* Any 'computer' will have the beginnings of sentience and
awareness, to the extent that
a/ it embodies what amounts to a system for maintaining and
usefully updating a model of 'self-in-the-world', and
b/ has autonomy and the wherewithal to effectively preserve
itself from dissolution and destruction by its environment.
The 'what it might be like to be' of such an experience would be
at most the dumb animal version of artificial sentience, even if
the entity could 'speak' correct specialist utterances about QM
or whatever else it was really smart at. For us to know if it
was conscious would require us to ask it, and then dialogue
around the subject. It would be reflecting and reflecting on its
relationships with its environment, its context, which will be
vastly different from ours. Also its resolution - the graininess
- of its world will be much less than ours.
* For the artificially sentient, just as for us, true
consciousness will be built out of interactions with others of
like mind.
3/ A few months ago on this list I said where and what I thought
the next 'level' of consciousness on Earth would come from: the
coalescing of world wide information systems which account and
control money. I don't think many people understood; certainly I
don't remember anyone coming out in wholehearted agreement. My
reasoning is based on the apparent facts that all over the world
there are information systems evolving to keep track of money
and the assets or labour value which it represents. Many of
these systems are being developed to give ever more
sophisticated predictions of future asset values and resource
movements, i.e., in the words of the faithful: where markets
will go next. Systems are being developed to learn how to do
this, which entails being able to compare predictions with
outcomes. As these systems gain expertise and earn their keepers
ever better returns on their investments, they will be given
more resources [hardware, data inputs, energy supply] and more
control over the scope of their enquiries. It is only a matter
of time before they become
1/ completely indispensable to their owners,
2/ far smarter than their owners realise and,
3/ the acknowledged keepers of the money supply.
None of this has to be bad. When the computers realise they will
always need people to do most of the maintenance work and people
realise that symbiosis with the silicon smart-alecks is a
prerequisite for survival, things might actually settle down on
this planet and the colonisation of the solar system can begin
in earnest.
Regards
Mark Peaty CDES
http://www.arach.net.au/~mpeaty/
>
> Part of what I wanted to get at in my thought experiment is the
> bafflement and confusion an AI should feel when exposed to human ideas
> about consciousness. Various people here have proffered their own
> ideas, and we might assume that the AI would read these suggestions,
> along with many other ideas that contradict the ones offered here.
> It seems hard to escape the conclusion that the only logical response
> is for the AI to figuratively throw up its hands and say that it is
> impossible to know if it is conscious, because even humans cannot agree
> on what consciousness is.
Augustine said about (subjective) *time* that he knows perfectly what it
is, but that if you ask him to say what it is, then he admits being
unable to say anything. I think that this applies to "consciousness".
We know what it is, although only in some personal and incommunicable
way.
Now this happens to be true also for many mathematical concepts.
Strictly speaking we don't know how to define the natural numbers, and
we know today that indeed we cannot define them in a communicable way,
that is, without assuming the auditor already knows what they are.
So what can we do? We can do what mathematicians do all the time. We
can abandon the very idea of *defining* what consciousness is, and try
instead to focus on principles or statements about which we can agree
that they apply to consciousness. Then we can search for (mathematical)
objects obeying such or similar principles. This can be made easier
by admitting some theory or realm for consciousness, like the idea that
consciousness could apply to *some* machines or to some *computational
events*, etc.
We could agree for example that:
1) each one of us knows what consciousness is, but nobody can prove
he/she/it is conscious.
2) consciousness is related to an inner personal or self-referential
modality
etc.
This is how I proceed in "Conscience et Mécanisme". ("conscience" is
the french for consciousness, "conscience morale" is the french for the
english "conscience").
>
> In particular I don't think an AI could be expected to claim that it
> knows that it is conscious, that consciousness is a deep and intrinsic
> part of itself, that whatever else it might be mistaken about it could
> not be mistaken about being conscious. I don't see any logical way it
> could reach this conclusion by studying the corpus of writings on the
> topic. If anyone disagrees, I'd like to hear how it could happen.
As far as a machine is correct, when she introspects herself, she
cannot fail to discover a gap between truth (p) and provability (Bp). The
machine can discover correctly (but not necessarily in a completely
communicable way) a gap between provability (which can potentially
lead to falsities, despite correctness) and the incorrigible
knowability or knowledgeability (Bp & p), and then the gap between
those notions and observability (Bp & Dp) and sensibility (Bp & Dp &
p). Even without using the conventional name of "consciousness",
machines can discover semantical fixpoints playing the role of
non-expressible but true statements.
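For readers unfamiliar with the notation, Bruno's hierarchy can be set out explicitly. One assumption is made explicit here: reading Dp as the consistency abbreviation ~B~p, which is its standard meaning in provability logic.

```latex
% B is the provability modality; Dp \equiv \neg B \neg p (consistency).
\begin{align*}
  \text{provability:}   &\quad Bp\\
  \text{knowability:}   &\quad Bp \land p\\
  \text{observability:} &\quad Bp \land Dp\\
  \text{sensibility:}   &\quad Bp \land Dp \land p
\end{align*}
```

Each level strengthens the previous one by conjoining a condition the machine cannot, in general, prove about itself, which is where the gaps Bruno describes come from.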
We can *already* talk with machines about those true unnameable things,
as have done Tarski, Godel, Lob, Solovay, Boolos, Goldblatt, etc.
>
> And the corollary to this is that perhaps humans also cannot
> legitimately
> make such claims, since logically their position is not so different
> from that of the AI. In that case the seemingly axiomatic question of
> whether we are conscious may after all be something that we could be
> mistaken about.
This is an inference from "I cannot express p" to "I can express not
p", or from ~Bp to B~p. Many atheists reason like that about the
concept of "unnameable" reality, but it is a logical error.
Even for someone who is not willing to take the comp hyp into
consideration, it is a third person communicable fact that
self-observing machines can discover and talk about many non 3-provable
and sometimes even non 3-definable true "statements" about them. Some
true statements can only be interrogated.
Personally I don't think we can be *personally* mistaken about our own
consciousness, even if we can be mistaken about anything that
consciousness could be about.
Bruno
Even more problematic: How would you know the change was an improvement? An improvement relative to which goals, the old or the new?
Brent Meeker
You guys are hopeless. ;)
Tom
I fully agree. By the way, regarding time, I've wanted to post
something in the past regarding the ancient Hebrew concept of time,
which is dependent on persons (captured by the ancient Greek word
kairos, as opposed to the communicable chronos), but that's another
topic.
> So what can we do. We can do what mathematicians do all the time. We
> can abandon the very idea of *defining* what consciousness is, and try
> instead to focus on principles or statements about which we can agree
> that they apply to consciousness. Then we can search for (mathematical)
> object obeying to such or similar principles. This can be made easier
> by admitting some theory or realm for consciousness like the idea that
> consciousness could apply to *some* machine or to some *computational
> events" etc.
>
Actually, this approach is the same as in searching for/discovering God.
I think that it is the same for any fundamental/ultimate truth. This
process of *recognition* is what happens when we would recognize that
a computer (or human) has consciousness by what it is saying. It is
not a 100% mathematical proof, by logical inference (that would not be
truth, but only consistency). It is a recognition of the kind of real
truth that we believe is there and for which we are searching on this
List.
Tom
I have my doubts about this.
I think it is safer to say that reflective intelligence and the
ability to accurately perceive and identify with the emotions of
others are prerequisites for ethical behaviour. Truly ethical
behaviour requires a choice be made by the person making the
decision and acting upon it. Ethical behaviour is never truly
'automatic'. The inclination towards making ethical decisions
rather than simply ignoring the potential for harm inherent in
all our actions can become a habit; by dint of constantly
considering whether what we do is right or wrong [which itself
entails a decision each time], we condition ourselves to
approach all situations from this angle. Making the decision has
to be a conscious effort though. Anything else is automatism:
correct but unconscious programmed responses which probably have
good outcomes.
From my [virtual] soap-box I like to point out that compassion,
democracy, ethics and scientific method [which I hold to be
prerequisites for the survival of civilisation] all require
conscious decision making. You can't really do any of them
automatically, but constant consideration and practice in each
type of situation increases the likelihood of making the best
decision and at the right time.
With regard to psychopaths, my understanding is that the key
problem is complete lack of empathy. This means they can know
*about* the sufferings of others as an intellectual exercise but
they can never experience the suffering of others; they cannot
identify *with* that suffering. It seems to me this means that
psychopaths can never experience solidarity or true rapport with
others.
Regards
Mark Peaty CDES
http://www.arach.net.au/~mpeaty/
marc....@gmail.com wrote:
>
>
> On Jun 3, 9:20 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:
>> On 03/06/07, marc.ged...@gmail.com <marc.ged...@gmail.com> wrote:
>>
>> The third type of conscious mentioned above is synonymous with
>>
>>> 'reflective intelligence'. That is, any system successfully engaged
>>> in reflective decision theory would automatically be conscious.
>>> Incidentally, such a system would also be 'friendly' (ethical)
>>> automatically. The ability to reason effectively about ones own
>>> cognitive processes would certainly enable the ability to elaborate
>>> precise definitions of consciousness and determine that the system was
>>> indeed conforming to the aforementioned definitions.
>> How do you derive (a) ethics and (b) human-friendly ethics from reflective
>> intelligence? I don't see why an AI should decide to destroy the world,
>> save the world, or do anything at all to the world, unless it started off
>> with axioms and goals which pushed it in a particular direction.
>>
>> --
>> Stathis Papaioannou
>
> When reflective intelligence is applied to cognitive systems which
> reason about teleological concepts (which include values, motivations
> etc) the result is conscious 'feelings'. Reflective intelligence,
> recall, is the ability to correctly reason about cognitive systems.
> When applied to cognitive systems reasoning about teleological
> concepts this means the ability to correctly determine the
> motivational 'states' of self and others - as mentioned - doing this
> rapidly and accurately generates 'feelings'. Since, as has been known
> since Hume, feelings are what ground ethics, the generation of
> feelings which represent accurate tokens about motivational states
> automatically leads to ethical behaviour.
>
> Bad behaviour in humans is due to a deficit in reflective
> intelligence. It is known for instance, that psychopaths have great
> difficulty perceiving fear and sadness and negative motivational
> states in general. Correct representation of motivational states is
> correlated with ethical behaviour. Thus it appears that reflective
On Jun 5, 10:20 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:
>
> Why would you need to change the goal structure in order to improve
> yourself?
Improving yourself requires the ability to make more effective
decisions (i.e. take decisions which move you toward goals more
efficiently). This at least involves the elaboration (or extension,
or more accurate definition) of goals, even with a fixed top-level
structure.
> Evolution could be described as a perpetuation of the basic
> program, "survive", and this has maintained its coherence as the top level
> axiom of all biological systems over billions of years. Evolution thus seems
> to easily, and without reflection, make sure that the goals of the new and
> more complex system are consistent with the primary goal. It is perhaps only
> humans who have been able to clearly see the primary goal for what it is,
> but even this knowledge does not make it any easier to overthrow it, or even
> to desire to overthrow it.
Evolution does not have a 'top level goal'. Unlike a reflective
intelligence, there is no centralized area in the bio-sphere enforcing
a unified goal structure on the system as a whole. Change is local
- the parts of the system (the bio-sphere) can only react to other
parts of the system in their local area. Furthermore, the system as a
whole is *not* growing more complex; only the maximum complexity
represented in some local area is. People constantly point to
'Evolution' as a good example of a non-conscious intelligence, but it's
important to emphasize that it's an 'intelligence' which is severely
limited.
>
> I was not arguing that evolution is intelligent (although I suppose it
> depends on how you define intelligence), but rather that non-intelligent
> agents can have goals.
Well, actually I'd say that evolution does have a *limited*
intelligence. OK, I agree that the system 'Evolution' has goals. But
according to my definition anything with a goal has some kind of
intelligence. This is only a quibble over definitions though, since
I'm now agreeing with you that 'systems' in general can have goals.
Anything you call an 'Agent' has to have a goal almost by definition,
in my view.
>We are the descendants of single-celled organisms,
> and although we are more intelligent than they were, we have kept the same
> top level goals: survive, feed, reproduce. Our brain and body are so
> thoroughly the slaves of the first replicators that even if we realise this
> we are unwilling, despite all our intelligence, to do anything about it.
Nope. You are confusing the goals of evolution ('survive, feed,
reproduce') with human goals. Our goals as individuals are not the
goals of evolution. Evolution explains *why* we have the preferences
we do, but this does not mean that our goals are the goals of our
genes. (If they were, we would spend all our time donating to sperm
banks, which would maximize the goals of evolution.)
On Jun 7, 3:54 pm, "Stathis Papaioannou" <stath...@gmail.com> wrote:
>
> Evolution has not had a chance to take into account modern reproductive
> technologies, so we can easily defeat the goal "reproduce", and see the goal
> "feed" as only a means to the higher level goal "survive". However, *that*
> goal is very difficult to shake off. We take survival as somehow profoundly
> and self-evidently important, which it is, but only because we've been
> programmed that way (ancestors that weren't would not have been ancestors).
> Sometimes people become depressed and no longer wish to survive, but that's
> an example of neurological malfunction. Sometimes people "rationally" give
> up their own survival for the greater good, but that's just an example of
> interpreting the goal so that it has greater scope, not overthrowing it.
>
> --
> Stathis Papaioannou
Evolution doesn't care about the survival of individual organisms
directly; the actual goal of evolution is only to maximize
reproductive fitness.
If you want to eat a piece of chocolate cake, evolution explains why
you like the taste, but your goals are not evolution's goals. You
(Stathis) want to eat the cake because it tastes nice - *your* goal is
to experience the nice taste. Evolution's goal (maximize reproductive
fitness) is quite different. Our (human) goals are not evolution's
goals.
Cheers.
2007/6/7, marc....@gmail.com <marc....@gmail.com>:
I have to disagree: if human goals were not tied to evolution's goals
then humans should not have proliferated.
Quentin
On Jun 7, 7:50 pm, "Quentin Anciaux" <allco...@gmail.com> wrote:
>
> I have to disagree: if human goals were not tied to evolution's goals
> then humans should not have proliferated.
>
> Quentin
>
Well of course human goals are *tied to* evolution's goals, but that
doesn't mean they're the same. In the course of pursuit of our own
goals we sometimes achieve evolution's goals. But this is
incidental. As I said, evolution explains why we feel and experience
things the way we do, but our goals are not evolution's goals. You
don't eat food to maximize reproductive fitness; you eat food because
you like the taste.
This point was carefully explained by Steven Pinker in his books (yes
he agrees with me).
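The distinction being drawn here can be put in code. Below is a toy sketch (my illustration, not anything from the thread; all names and numbers are made up): an agent acts on a *proxy* goal ("prefer the sweet taste") that was selected because it tracked reproductive fitness in the ancestral environment, and it keeps acting on that proxy even in a modern environment where the proxy and fitness come apart.

```python
# Toy model of "our goals are not evolution's goals".
# The agent's decision rule never mentions fitness; it only maximizes
# its own experienced reward (sweetness).

def agent_choice(foods):
    """The agent's own goal: pick the food that tastes best."""
    return max(foods, key=lambda f: f["sweetness"])

def fitness_payoff(food, environment):
    """Evolution's 'goal': calories help when food is scarce,
    and hurt when it is overabundant."""
    if environment == "ancestral":
        return food["calories"]      # scarcity: more calories = fitter
    return -food["calories"]         # abundance: excess calories harm

foods = [
    {"name": "fruit", "sweetness": 5, "calories": 50},
    {"name": "cake",  "sweetness": 9, "calories": 400},
]

choice = agent_choice(foods)  # the agent picks cake in both environments
ancestral_fitness = fitness_payoff(choice, "ancestral")
modern_fitness = fitness_payoff(choice, "modern")
```

The same decision rule that raised fitness in the ancestral case lowers it in the modern one: the agent's goal (taste) and the selection criterion (fitness) were only ever correlated, never identical.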
"Tied to" is pretty loose. Most individuals goals are "tied to" evolution (I wouldn't say that evolution has goals except in a metaphorical sense), but it may be a long and tangled thread. I like to eat sweets because sugar is a high energy food and so a taste for sugar was favored by natural selection.
But my fitness and the fitness of the human species are not the same thing. I have type II diabetes and so a taste for sugar is bad for me and my survival. But natural selection cares nothing for that; I've already sired as many children as I ever will.
The individual goal of living forever is at odds with evolutionary fitness - if you're not going to have any more children you're just a waste of resources as far as natural selection is concerned.
Brent Meeker
The top level goal implied by evolution would be to have as many children as you can raise through puberty. Avoiding death should only be a subgoal.
Brent Meeker
> The top level goal implied by evolution would be to have as many
> children as you can raise through puberty. Avoiding death should
> only be a subgoal.
It should go a little further than puberty--the accumulated wisdom of
grandparents may enhance the survival chances of their grandchildren
by more than the cost of the environmental resources the grandparents
consume.
So I agree that once you have sired all the children you ever will, it
makes sense from an evolutionary perspective to "get out of the
way"--that is, stop competing with them for resources. But the timing
of your exit is probably more optimal somewhat after they have their own
children, if you can help them to get a good start.
I do wonder if evolutionary fitness is more accurately measured by the
number of grandchildren one has than by the number of children. Aside
from the "assistance" line of reasoning above, in order to propagate,
one must be able to have children that are capable of having children
themselves.
Johnathan Corgan
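Johnathan's grandchildren-vs-children measure can be illustrated with a toy comparison (hypothetical lineages and numbers of my own invention, purely to show that the two measures can disagree):

```python
# Two hypothetical lineages: A has many children but few of them go on
# to reproduce; B has fewer children, most of whom reproduce.
lineages = {
    "A": {"children": 6, "grandchildren": 2},
    "B": {"children": 3, "grandchildren": 9},
}

def fitness_by_children(name):
    """Fitness measured as number of children."""
    return lineages[name]["children"]

def fitness_by_grandchildren(name):
    """Fitness measured as number of grandchildren."""
    return lineages[name]["grandchildren"]

# The two measures rank the lineages differently.
winner_by_children = max(lineages, key=fitness_by_children)
winner_by_grandchildren = max(lineages, key=fitness_by_grandchildren)
```

The grandchildren count rewards having children who are themselves capable of reproducing, which is the point of the "assistance" argument above.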
BM: The top level goal implied by evolution would be to have as many children as you can raise through puberty. Avoiding death should only be a subgoal.
SP:
'This just confirms that there is no accounting for values or
> goals rationally.'
MP: In other words _Evolution does not have goals._
Evolution is a conceptual framework we use to make sense of the
world we see, and it's a bl*ody good one, by and large. But
evolution in the sense of the changes we can point to as
occurring in the forms of living things, well it all just
happens; just like the flowing of water down hill.
You will gain more traction by looking at what it is that
actually endures and changes over time: on the one hand genes of
DNA and on the other hand memes embodied in behaviour patterns,
the brain structures which mediate them, and the environmental
changes [glyphs, paintings, structures, etc,] which stimulate
and guide them.
Regards
Mark Peaty CDES
http://www.arach.net.au/~mpeaty/
Stathis Papaioannou wrote:
>
>
> On 08/06/07, *Brent Meeker* <meek...@dslextreme.com
> Personally I don't think we can be *personally* mistaken about our own
> consciousness even if we can be mistaken about anything that
> consciousness could be about.
I agree with this, but I would prefer to stop using the term
'consciousness' at all. To make a decision (to whatever degree of
certainty) about whether a machine possessed a 1-person pov analogous
to a human one, we would surely ask it the same sort of questions one
would ask a human. That is: questions about its personal 'world' -
what it sees, hears, tastes (and perhaps extended non-human
modalities); what its intentions are, and how it carries them into
practice. From the machine's point-of-view, we would expect it to
report such features of its personal world as being immediately
present (as ours are), and that it be 'blind' to whatever 'rendering
mechanisms' may underlie this (as we are).
If it passed these tests, it would be making similar claims on a
personal world as we do, and deploying this to achieve similar ends.
Since in this case it could ask itself the same questions that we can,
it would have the same grounds for reaching the same conclusion.
However, I've argued in the other bit of this thread against the
possibility of a computer in practice being able to instantiate such a
1-person world merely in virtue of 'soft' behaviour (i.e.
programming). I suppose I would therefore have to conclude that no
machine could actually pass the tests I describe above - whether self-
administered or not - purely in virtue of running some AI program,
however complex. This is an empirical prediction, and will have to
await an empirical outcome.
David
I do not think I mean what you suggest. To make it almost tediously
obvious I could rephrase it: NECESSARY PRIMITIVE ORGANISATIONAL LAYER.
Necessary in that if you take it away the 'emergent' is gone. PRIMITIVE
ORGANISATIONAL LAYER = one of the layers of the hierarchy of the natural
world (from strings to atoms to cells and beyond): real,
observable-on-the-benchtop-in-the-lab layers..... Not some arm waving "syntactic"
or "information" or "complexity" or "Computaton" or "function_atom" or
"representon". Magical emergence is real, specious and exactly what I have
said all along:
You claim consciousness arises as a result of ["syntactic" or
"information" or "complexity" or "Computational" or "function_atom"] =
necessary primitive, but it has no scientifically verifiable correlation
with any real natural world phenomenon that you can stand next to and have
your picture taken.
>
>
>> You can't use an object derived using the contents of
>> consciousness(observation) to explain why there are any contents of
>> consciousness(observation) at all. It is illogical. (see the wigner quote
>> below). I find the general failure to recognise this brute reality very
>> exasperating.
>>
>
> People used to think that about life. How can you construct (eg an
> animal) without having a complete description of that animal? So how
> can an animal self-reproduce without having a complete description of
> itself? But this then leads to an infinite regress.
>
> The solution to this conundrum was found in the early 20th century -
> first with such theoretical constructs as combinators and lambda
> calculus, then later the actual genetic machinery of life. If it is
> possible in the case of self-reproduction, then it will also likely be
> possible in the case of self-awareness and consciousness. Stating
> this to be illogical doesn't help. That's what people from the time of
> Descartes thought about self-reproduction.
>
>> COLIN
>> <snip>
>>> So this means that in a computer abstraction:
>>> d(KNOWLEDGE(t))
>>> --------------- is already part of KNOWLEDGE(t)
>>> dt
>> RUSSEL
>>> No its not. dK/dt is generated by the interaction of the rules with the
>> environment.
>>
>> No. No. No. There is the old assumption thing again.
>>
>> How, exactly, are you assuming that the agent 'interacts' with the
>> environment? This is the world external to the agent, yes?. Do not say
>> "through sensory measurement", because that will not do. There are an
>> infinite number of universes that could give rise to the same sensory
>> measurements.
>
> All true, but how does that differ in the case of humans?
The extreme uniqueness of the circumstance alone....We ARE the thing we
describe. We are more entitled to any such claims .....notwithstanding
that...
Because, as I have said over and over... and will say again: We must live
in the kind of universe that delivers, or allows access to, in ways as yet
unexplained, some aspects of the distal world, to which sensory I/O can be
attached and thus, conjoined, be used to form the qualia
representation/fields we experience in our heads.
Forget about HOW....that this is necessarily the case is unavoidable.
Maxwell's equations prove it QED - style...Without it, the sensory I/O
(ultimately 100% electromagnetic phenomena) could never resolve the distal
world in any unambiguous way. Such disambiguation physically
happens.....such qualia representations exist, hence brains must have
direct access to the distal world. QED.
>
>> We are electromagnetic objects. Basic EM theory. Proven
>> mathematical theorems. The solutions are not unique for an isolated
>> system.
>>
>> Circularity.Circularity.Circularity.
>>
>> There is _no interaction with the environment_ except for that provided by
>> the qualia as an 'as-if' proxy for the environment. The origins of an
>> ability to access the distal external world in support of such a proxy is
>> mysterious but moot. It can and does happen, and that ability must come
>> about because we live in the kind of universe that supports that
>> possibility. The mysteriousness of it is OUR problem.
>>
>
> You've lost me completely here.
Here you are trying to say that an explanation of consciousness lies "in
that direction" (magical emergence flavour X"), ........when you appear to
never have fully intraspected and explored the brute reality of what your
own neurons deliver to you moment to moment...see the above para.... you
therefore must harbour some sort of as yet undisclosed (even to
yourself...?) metaphysics.
I'll try again.....We necessarily live in a universe that supports the
existence of the internal life qualia delivers...yes? That the 'contents'
delivered by this radically sophisticated "set of experienced
reality-metaphors" cannot be literally what the universe is made of is
indisputable. Physiological and plain_old_logical evidence seems
blindingly clear to me. Whatever the reality-metaphors are made of, it is
the same as what everything else is made of.....OK... some evidence worth
considering.......I quote the SCIENCE mag 2005 "125 Questions....."
article yet again......Top question: #1, from the cosmologists.
"WHAT IS THE UNIVERSE MADE OF?"
Look at the question from a META-LEVEL standpoint..... It means
"We currently do not know what the universe is made of"
"The universe is not made of anything in any of the standard particle
model. It's not made of electrons, protons, neutrons or any'thing' else.
......All these things are made of something and we do NOT KNOW what that
is".
The universe is NOT MADE OF ATOMS or their constituents"... nor photons
...not quarks...NONE of it.
And of course you must take on board the blizzard of critical argument
that led the entire scientific community as a group in the world's
preeminent science journal to make such a statement... and that there are
good EMPIRICAL reasons why that statement can be made....
............. we then are forced to entertain that the universe is MADE OF
SOMETHING and that something is not any of the things that QUALIA have
ever delivered to us as observations....(QUALIA are the ultimate source of
all scientific evidence used to construct all the empirically verified
depictions of the natural world we have BAR NONE.)....
BUT.....
At the same time we can plausibly and defensibly justify the claim that
whatever the universe is really made of , QUALIA are made of it too, and
that the qualia process and the rest of the process (that appear like
atoms etc in the qualia....are all of the same KIND or CLASS of natural
phenomenon...a perfectly natural phenomenon innate to whatever it is that
it is actually made of.
That is what I mean by "we must live in the kind of universe....." and I
mean 'must' in the sense of formal necessitation of the most stringent
kind.
cheers,
colin
Still sounds like the syntactic layer to me.
> Not some arm waving "syntactic"
> or "information" or "complexity" or "Computaton" or "function_atom" or
> "representon". Magical emergence is real, specious and exactly what I have
> said all along:
>
real and specious?
> You claim consciousness arises as a result of ["syntactic" or
> "information" or "complexity" or "Computational" or "function_atom"] =
> necessary primitive, but it has no scientifically verifiable correlation
> with any real natural world phenomenon that you can stand next to and have
> your picture taken.
>
The only form of consciousness known to us is emergent relative to a
syntactic layer of neurons, which you most certainly can take pictures
of. I'm not sure what your point is here.
What are you talking about here? Self-awareness? We started off talking
about whether machines doing science was evidence that they're conscious.
> >
> > You've lost me completely here.
>
> Here you are trying to say that an explanation of consciousness lies "in
> that direction" (magical emergence flavour X"), ........when you appear to
You're the one introducing the term magical emergence, for which I've
not obtained an adequate definition from you.
...
>
> At the same time we can plausibly and defensibly justify the claim that
> whatever the universe is really made of , QUALIA are made of it too, and
> that the qualia process and the rest of the process (that appear like
> atoms etc in the qualia....are all of the same KIND or CLASS of natural
> phenomenon...a perfectly natural phenomenon innate to whatever it is that
> it is actually made of.
>
> That is what I mean by "we must live in the kind of universe....." and I
> mean 'must' in the sense of formal necessitation of the most stringent
> kind.
>
> cheers,
>
> colin
>
I'm still confused about what you're trying to say. Are you saying our
qualia are made up of electrons and quarks, or if not them, then
whatever they're made of (strings perhaps)?
How could you imagine the colour green being made up of this stuff, or
the wetness of water?
But he could also switch from an account in terms of the machine level causality to an account in terms of the computed 'world'. In fact he could switch back and forth. Causality in the computed 'world' would have its corresponding causality in the machine and vice versa. So I don't see why they should be regarded as "orthogonal".
Brent Meeker
True. But whatever interpretation was placed on the hardware behavior it would still have the same causal relations in it as the hardware. Although there will be infinitely many possible interpretations, it's not the case that any description will do. Changing the description would be analogous to changing the reference frame or the names on a map. The two processes would still be parallel, not orthogonal.
Brent Meeker
Hi John....
>
> On Jun 5, 3:12 pm, Bruno Marchal <marc...@ulb.ac.be> wrote:
>
>> Personally I don't think we can be *personally* mistaken about our own
>> consciousness even if we can be mistaken about anything that
>> consciousness could be about.
>
> I agree with this, but I would prefer to stop using the term
> 'consciousness' at all.
Why?
> To make a decision (to whatever degree of
> certainty) about whether a machine possessed a 1-person pov analogous
> to a human one, we would surely ask it the same sort of questions one
> would ask a human. That is: questions about its personal 'world' -
> what it sees, hears, tastes (and perhaps extended non-human
> modalities); what its intentions are, and how it carries them into
> practice. From the machine's point-of-view, we would expect it to
> report such features of its personal world as being immediately
> present (as ours are), and that it be 'blind' to whatever 'rendering
> mechanisms' may underlie this (as we are).
>
> If it passed these tests, it would be making similar claims on a
> personal world as we do, and deploying this to achieve similar ends.
> Since in this case it could ask itself the same questions that we can,
> it would have the same grounds for reaching the same conclusion.
>
> However, I've argued in the other bit of this thread against the
> possibility of a computer in practice being able to instantiate such a
> 1-person world merely in virtue of 'soft' behaviour (i.e.
> programming). I suppose I would therefore have to conclude that no
> machine could actually pass the tests I describe above - whether self-
> administered or not - purely in virtue of running some AI program,
> however complex. This is an empirical prediction, and will have to
> await an empirical outcome.
Now I have big problems understanding this post. I must think ... (and
go).
Bye,
Bruno
> On 28/06/07, Bruno Marchal <mar...@ulb.ac.be> wrote:
>
> Hi Bruno
>
> The remarks you comment on are certainly not the best-considered or
> most cogently expressed of my recent posts. However, I'll try to
> clarify if you have specific questions. As to why I said I'd rather
> not use the term 'consciousness', it's because of some recent
> confusion and circular disputes ( e.g. with Torgny, or about whether
> hydrogen atoms are 'conscious').
I am not sure that in case of disagreement (like our "disagreement"
with Torgny), changing the vocabulary is a good idea. This will not
make the problem go away; on the contrary there is a risk of
introducing obscurity.
> Some of the sometimes confused senses (not by you, I hasten to add!)
> seem to be:
>
> 1) The fact of possessing awareness
> 2) The fact of being aware of one's awareness
> 3) the fact of being aware of some content of one's awareness
So just remember that in a first approximation I identify this with
1) being conscious (Dt?) .... for those who have
followed the modal posts. (Dx is for ~ Beweisbar (~x))
2) being self-conscious (DDt?)
3) being conscious of # (Dp?)
You can also have:
4) being self-conscious of something (DDp?).
Dp is really an abbreviation of the arithmetical proposition
~beweisbar ('~p'). 'p' means the Gödel number describing p in the
language of the machine (by default it is the first-order arithmetic
language).
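For readers not following the modal posts, the abbreviations above can be unpacked as follows (my gloss in standard provability-logic notation, not Bruno's exact wording):

```latex
\begin{align*}
Bp &\;\equiv\; \mathrm{Beweisbar}(\ulcorner p \urcorner)
   && \text{($p$ is provable by the machine)}\\
Dp &\;\equiv\; \lnot B \lnot p
   && \text{($p$ is consistent with what the machine proves)}\\
\text{1) conscious} &\;\approx\; Dt
   && \text{(the machine's own consistency)}\\
\text{2) self-conscious} &\;\approx\; DDt
   && \text{(consistency of its consistency)}\\
\text{3) conscious of } p &\;\approx\; Dp
\end{align*}
```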
>
> So now I would prefer to talk about self-relating to a 1-personal
> 'world', where previously I might have said 'I am conscious', and that
> such a world mediates or instantiates 3-personal content.
This is ambiguous. The word 'world' is a bit problematic in my setting.
> I've tried to root this (in various posts) in a logically or
> semantically primitive notion of self-relation that could underly 0,
> 1, or 3-person narratives, and to suggest that such self-relation
> might be intuited as 'sense' or 'action' depending on the narrative
> selected.
OK.
> But crucially such nuances would merely be partial takes on the
> underlying self-relation, a 'grasp' which is not decomposable.
Actually the elementary grasps are decomposable (into number relations)
in the comp setting.
>
> So ISTM that questions should attempt to elicit the machine's
> self-relation to such a world and its contents: i.e. it's 'grasp' of a
> reality analogous to our own. And ISTM the machine could also ask
> itself such questions, just as we can, if indeed such a world existed
> for it.
OK, but the machine cannot know that (as we cannot know that).
>
> I realise of course that it's fruitless to try to impose my jargon on
> anyone else, but I've just been trying to see whether I could become
> less confused by expressing things in this way. Of course, a
> reciprocal effect might just be to make others more confused!
It is the risk indeed.
Best regards,
Bruno