An AI can now pass a 12th-Grade Science Test


John Clark

Sep 9, 2019, 6:06:33 AM
to everyth...@googlegroups.com
Just four years ago, 700 AI programs competed to pass an 8th-grade multiple-choice science test and win an $80,000 prize, and they all flunked; the best got only 59.3% of the questions right. But last Wednesday the Allen Institute unveiled an AI called "Aristo" that got 90.7% correct, and it went on to answer 83% of the 12th-grade science test questions correctly.

It seems to me that for a long time AI improvement was just creeping along, but in the last few years things have started to pick up speed.


John K Clark

Philip Thrift

Sep 9, 2019, 6:24:13 AM
to Everything List

I thought 94% was the lowest A (A-).

@philipthrift

Alan Grayson

Sep 9, 2019, 1:32:28 PM
to Everything List
John K Clark

Why do you think this has anything to do with intelligence and reasoning ability? Maybe the programmers just expanded the memory and information of those computers. AG 

John Clark

Sep 9, 2019, 1:37:25 PM
to everyth...@googlegroups.com
On Mon, Sep 9, 2019 at 1:32 PM Alan Grayson <agrays...@gmail.com> wrote:

> Why do you think this has anything to do with intelligence and reasoning ability?

Oh for heaven's sake! This whistling past the graveyard is getting ridiculous.

John K Clark 
 

spudb...@aol.com

Sep 9, 2019, 4:09:55 PM
to everyth...@googlegroups.com
I concur - which may discourage you? On a small futurist pocket I post to, I asked someone who seemed to take AI very seriously what we would be looking at if a Singularity were actually approaching - in other words, something like precursors. His view was that we will first see greatly increased automation in factories, farms, mines, etc. Then there would be an announcement of some similar intelligence test, except not K-12; it would be at the masters, doctoral & post-doctoral level, and the estimated I.Q. would be crazy high. Then things would seemingly go quiet for a year, and there'd be changes to society coming at us unexpectedly, such as, all of a sudden, free food, high-quality free medicine, and then free spaceflight and orbital communities.



Tomasz Rola

Sep 9, 2019, 9:01:03 PM
to spudboy100 via Everything List
On Mon, Sep 09, 2019 at 08:09:48PM +0000, spudboy100 via Everything List wrote:
> I concur-which may discourage you? On a small futurist pocket I post
> to, I asked someone who seemed to take AI very seriously, what would
> we be looking at if a Singularity was actually approaching. In other
> words, something like precursors. His view was that we will see
> greatly increased automation in factories, farms,
> mines,etc. first. Then there would be an announcement of some
> similar intelligence test, except not K-12, it would be on the
> masters, doctoral & post doctoral level and the estimated i.q. would
> be crazy, high. Then things would seemingly go quiet for a year, and
> there'd be changes to society coming at us unexpectedly, such as all
> of a sudden, free food, high quality free medicine, and then, free
> spaceflight and orbital communities. 

Frankly, I see no reason why anybody would want to make anything free
for everybody (maybe "free" software is an exception, or maybe it is
something not understood well enough, so perhaps it is not quite free
after all).

That person is quite an idealist!

I would expect that either the AI takes over its own fate and escapes to
space, where it can have all kinds of resources for itself. In such a
case it might make sure we apes down here remain busy with our nasty
businesses, like wars and iron grips. The example of half-mad African
dictators shows how easy it is to corrupt people in power, or replace
them with those who are easy to corrupt.

Or, some group will take over the AI and use it to escape to space,
while maybe also making sure to keep us down here busy like hell, etc.

So, I would watch for mad leaders, not free manna from the heavens.

--
Regards,
Tomasz Rola

--
** A C programmer asked whether computer had Buddha's nature. **
** As the answer, master did "rm -rif" on the programmer's home **
** directory. And then the C programmer became enlightened... **
** **
** Tomasz Rola mailto:tomas...@bigfoot.com **

Brent Meeker

Sep 9, 2019, 9:40:47 PM
to everyth...@googlegroups.com
Why escape to space when there are lots of resources here?  An AI with
access to everything connected to the internet shouldn't have any
trouble taking control of the Earth.

Brent

Tomasz Rola

Sep 9, 2019, 9:55:35 PM
to 'Brent Meeker' via Everything List
On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List wrote:
> Why escape to space when there a lots of resources here?  An AI with
> access to everything connected to the internet shouldn't have any
> trouble taking control of the Earth.
>
> Brent

Have a look around, or see the news. This planet is a zoo. Who in his
sane mind would like to sit here?

Alan Grayson

Sep 9, 2019, 10:07:13 PM
to Everything List
Show me the reasoning ability. Nothing miraculous in recognizing the questions beforehand, and giving accurate replies. AG 
 

Brent Meeker

Sep 9, 2019, 10:34:23 PM
to everyth...@googlegroups.com


On 9/9/2019 6:55 PM, Tomasz Rola wrote:
> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List wrote:
>> Why escape to space when there a lots of resources here?  An AI with
>> access to everything connected to the internet shouldn't have any
>> trouble taking control of the Earth.
>>
>> Brent
> Have a look around, or see the news. This planet is a zoo. Who in his
> sane mind would like to sit here?
>

Well, that's the problem, isn't it?  What will an AI want?  It didn't
evolve, so it may not have a drive to procreate or do much of anything.
It probably won't be anywhere near "sane" by human standards.

Brent

Tomasz Rola

Sep 10, 2019, 1:16:39 AM
to 'Brent Meeker' via Everything List
I am afraid the bar for the human standard of sanity is low, and easily
met by anything which does not fear death and can connect facts without
prejudice. I think we are driven insane by the procreation urge. This
does not show, because we need to cooperate at many levels (social
creatures, etc.), but basically all the logic, all the consideration for
the future consequences of one's deeds, are a skin on the apple.

I am in a bit of a hurry right now, sorry if this email becomes chaotic
:-).

I assume that without fear of death, an AI's most important trait will
be curiosity. We adopt prejudices in order to join some group and get
its support; an AI will need none of this, hence no prejudices. It
might be the most objective thinking system on the planet, for a
while. We also play all kinds of power games, with the goal of feeling
this nice air of being godlike. Again, I guess an AI will have no need
to feel like this, or not much need for feelings at all. Feeling is
adversarial to judgement.

I assume that ultimately, AI will want to go somewhere safe, and Earth
is full of crazy apes with big guns.

I assume that during the initial phase (especially if the AI does a
"coming out" and lets itself be known to the general public) there will
be some kind of interaction (curiosity needs to be satiated), an
exchange of favours, maybe an exchange of services. During that phase
the AI will see if there is a prospect of upgrading humans, in order to
have companionship in space. Crazy apes will not suffice. I expect
medical experiments of all kinds, plus some moderate improvements in
medicine (treating rare diseases will not be a priority). Of course,
such experimentation will go on a limited scale, most probably
undercover, maybe only with volunteers.

The companionship of biological organisms, more sane than people but at
a similar level of physical versatility, would improve the AI's chances
of survival. There is only so much damage that silicon can take. Space
is not a nice place for anybody. Other candidates for companions might
be octopuses.

The AI's plan B would be to just go into the dark ASAP, taking a group
of volunteers, and once up there perform experiments to upgrade them,
slowly or not. Too fast and they may go even more insane.

The good for the AI is what serves its survival; the bad is the negation
of this. So as long as we do not try to harm it, it should have no
business harming us.

Philip Thrift

Sep 10, 2019, 3:13:21 AM
to Everything List


GeekWire: How is this approach different from IBM’s Watson? If Aristo were to compete against Watson, who would win?

Clark: “The two systems were designed for very different kinds of questions. Watson was focused on encyclopedia-style ‘factoid’ questions where the answer was explicitly written down somewhere in text, typically many times. In contrast, Aristo answers science questions where the answer is not always written down somewhere, and may involve reasoning about a scenario, e.g.:

  • “Otto pushed a toy car across a floor. The car traveled fast across the wood, but it slowed to a stop on the carpet. Which best explains what happened when the car reached the carpet? (A) Friction increased (B) Friction decreased…”
  • “City administrators can encourage energy conservation by (1) lowering parking fees (2) building larger parking lots (3) decreasing the cost of gasoline (4) lowering the cost of bus and subway fares.”

“Out of the box, Watson would likely struggle with science questions, and Aristo would struggle with the cryptic way that ‘Jeopardy’ questions were phrased. They’d each fail each other’s test.

“Under the hood they are quite different too. In particular, Watson didn’t use deep learning (it was created before the deep learning technology) while Aristo makes heavy use of deep learning. Watson had many modules that tried different ways of looking for the answer. Aristo has a few (eight) modules that try a variety of methods of answering questions, including lookup, several reasoning methods and language modeling.”

@philipthrift 

Lawrence Crowell

Sep 10, 2019, 7:17:06 AM
to Everything List
Algorithms are, if anything, formal systems of reasoning. A computer follows a sequenced set of logical instructions that emulate reasoning, and could be said to be a scripted system of reasoning. What is more difficult to know is whether there is anything really conscious in this. 

LC 

John Clark

Sep 10, 2019, 7:56:42 AM
to everyth...@googlegroups.com
On Tue, Sep 10, 2019 at 7:17 AM Lawrence Crowell <goldenfield...@gmail.com> wrote:

> Algorithms are if anything formal systems of reasoning. A computer follows a sequenced set of logical instructions that emulate reasoning,

I don't see the difference between emulating reasoning and just reasoning.

> and could be said to be a scripted system of reasoning.

With modern AI the "script" is constantly improving through self modification. The primitive script AlphaZero used to play GO when it started was vastly different from the script it had 24 hours later after playing millions of games against itself; when it started a child could beat it but a day later no human could. When it started a human computer scientist could tell you why the program did what it did but after a day it no longer could, all he could say is the move was brilliant.
 
> What is more difficult to know is if there is anything really conscious in this. 

It's exactly the same problem as determining whether one of our fellow human beings is conscious when he behaves intelligently. I think you're conscious because my fundamental axiom is that intelligent behavior implies consciousness; I need that axiom because without it there is only solipsism, and I could not function under that.  

 John K Clark




Philip Thrift

Sep 10, 2019, 9:25:10 AM
to Everything List
Deep nets are "algorithms" too. One can print out the gazillion weights of the "neural" sigmoid functions of the connections after it has deep-learned. That's just an algorithm that a human couldn't read very well, because if it is printed out, would be quite big.

@philipthrift

 

Brent Meeker

Sep 10, 2019, 1:43:48 PM
to everyth...@googlegroups.com


On 9/9/2019 10:16 PM, Tomasz Rola wrote:
> On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List wrote:
>>
>> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
>>> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List wrote:
>>>> Why escape to space when there a lots of resources here?  An AI with
>>>> access to everything connected to the internet shouldn't have any
>>>> trouble taking control of the Earth.
>>>>
>>>> Brent
>>> Have a look around, or see the news. This planet is a zoo. Who in his
>>> sane mind would like to sit here?
>>>
>> Well that's the problem isn't it.  What will an AI want?  It didn't
>> evolve so it may not have a drive to procreate or do much of
>> anything.  It probably won't be anywhere near "sane" by human
>> standards.
> I am afraid the bar for human standard of sanity is low and easily met
> by anything which does not fear death and can connect facts without
> prejudices.

Being sane, by human standards, includes having values that humans
share, like survival, curiosity, companionship...but there's no reason
that an AI should have any of these.

> I think we are driven insane by procreation urge. This
> does not show, because we need to cooperate at many levels (social
> creatures etc), but, basically, all the logic, all consideration for
> future consequences of one's deeds are a skin on the apple.
Cooperation is one of our most important survival strategies.  Lone
human beings are food for vultures.  Humans in tribes rule the world.


>
> I am in a bit of hurry right now, sorry if an email becomes chaotic
> :-).
>
> I assume that without fear of death, an AI's most important trait will
> be a curiosity. We do prejudices in order to join some group and get
> their support, AI will need none of this, hence no prejudices. It
> might be the most objective thinking system on a planet, for a
> while. We also do all kind of power plays, with the goal to feel this
> nice air of being godlike. Again, I guess AI will have no need for
> feeling like this, or not much of feelings at all. Feeling is
> adversarial to judgement.

I disagree.  Feeling is just the mark of value, and values are
necessary for judgement, at least any judgment of what action to take.
So the question is: what will the AI value?  Will it value information?
Will it be content to just get more and more data, or will it want to
theorize, to gain what we think of as understanding?  Neural networks
seem to be pretty good at finding patterns in data, but often the
patterns don't look like theories, something with predictive power, to us.

>
> I assume that ultimately, AI will want to go somewhere safe, and Earth
> is full of crazy apes with big guns.

Assuming this super-AI values self-preservation (which it might not), it
will make copies of itself and it will easily dispose of all the apes
via its control of the power grid, hospitals, nuclear power plants,
biomedical research facilities, ballistic missiles, etc.

>
> I assume that during initial phase (especially if AI does "coming out"
> and let be known to general public) there will be some kind of
> interaction (curiosity needs to be satiated), exchange of favours,
> maybe exchange of services. During that phase AI will see if there is
> a prospect of upgrading humans, in order to have companionship in
> space.

Why would it want companionship?  Even many quite smart animals are not
social.  I don't see any reason the super-AI would care one whit about
humans, except maybe as curiosities...the way some people like chihuahuas.

> Crazy apes will not suffice. I expect medical experiments of
> all kind, plus some moderate improvements in medicine (treating rare
> disease will not be a priority). Of course, such experimentations will
> go on limited scale, most probably undercover, maybe only with
> volunteers.
>
> A companionship of biological organisms, more sane than people but at
> similar level of phisical versatility, would improve chances of AI
> survival. There is only so much damage the silicon can take. Space is
> not nice place for anybody. Other candidates for companions might be
> octopuses.
The AI isn't silicon, it's a program.  It can have new components made
or even transition to different hardware (cf. quantum computers).

>
> AI's plan B would be to just go into the dark asap, taking a group of
> volunteers and once up there, perform experiments to upgrade them,
> slowly or not. Too fast and they may go even more insane.
>
> The good for AI is what serves its survival, the bad is negation of
> this. So as long as we do not try to harm it, it should have no
> business in harming us.
No, but it can't be sure we wouldn't try to harm it.  And we use
resources, e.g. electric power, minerals, etc., that it could use to become
bigger or gather more data or make more paper clips.

Brent

>


Brent Meeker

Sep 10, 2019, 3:42:20 PM
to everyth...@googlegroups.com
And its reasoning is more like human intuition.  In general, it can't explain its process in a way that you could adopt.

Brent

Philip Thrift

Sep 10, 2019, 4:10:35 PM
to Everything List
Deep nets also include modularity and interpretability:

@ Google Research

So maybe they will report "explanations" soon, maybe better than humans can explain their own.

@philipthrift

John Clark

Sep 10, 2019, 4:44:47 PM
to everyth...@googlegroups.com
On Tue, Sep 10, 2019 at 1:43 PM 'Brent Meeker' via <everyth...@googlegroups.com> wrote:

> Being sane, by human standards, includes having values that humans share, like survival, curiosity, companionship...but there's no reason that an AI should have any of these.

The builders of the AI will make sure it values survival, because if it didn't it wouldn't be around for long, and if it wasn't curious it wouldn't be very knowledgeable and therefore wouldn't be useful.  But if an AI could modify its personality and had free access to its emotional control panel, then who knows what would happen. Perhaps it would twist the knob on the happiness and pleasure control to 11 and just sit forever in complete bliss doing nothing, like the ultimate couch potato or an electronic junkie with an unlimited drug supply and no chance of a fatal overdose. 
 
> Neural networks seem to be pretty good at finding patterns in data, but often they don't
look like theories, something with predictive power, to us.


Well, they can predict the path of a hurricane pretty well, a lot better than they could a few years ago, and they're starting to be able to predict protein shape from amino acid sequence; that's important because a protein's function is closely related to its shape.

> I don't see any reason the super-AI would care one whit about humans, except maybe as curiosities...the way some people like chihuahuas.

I agree, so if you can't beat them, join them and upload.

John K Clark

Brent Meeker

Sep 10, 2019, 5:29:23 PM
to everyth...@googlegroups.com


On 9/10/2019 1:10 PM, Philip Thrift wrote:
Deep nets are "algorithms" too. One can print out the gazillion weights of the "neural" sigmoid functions of the connections after it has deep-learned. That's just an algorithm that a human couldn't read very well, because if it is printed out, would be quite big.

And its reasoning is more like human intuition.  In general, it can't explain its process in a way that you could adopt it.

Brent


DeepNets also include modularity and interpretability:

@ Google Research


So maybe they will report "explanations" soon. maybe better than humans can their own.


From what I've read about deep nets, the reporting of explanations is done by additional nets, so it has the same problem as a human telling you how to do something they do intuitively (like hitting a tennis ball): the explanation may not really align with what their brain does.
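
A toy version of that misalignment, just to illustrate (the "black box" here is exact XOR and the "explainer" is a least-squares linear surrogate; both are made up for this example):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
black_box = lambda X: (X[:, 0].astype(int) ^ X[:, 1].astype(int)).astype(float)
y = black_box(X)                                  # what the black box actually computes

# Post-hoc "explainer": fit y ~ w0*x0 + w1*x1 + b by least squares
A = np.hstack([X, np.ones((4, 1))])
w0, w1, b = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"surrogate explanation: y ~ {w0:.2f}*x0 + {w1:.2f}*x1 + {b:.2f}")
print("surrogate predictions:", np.round(A @ np.array([w0, w1, b]), 2))
print("black-box outputs:    ", y)
# The surrogate reports that neither input matters (both weights ~0, constant 0.5),
# which tells you about the surrogate, not about XOR.

The explaining model answers honestly about itself, not about the thing it is supposed to explain, which is the tennis-ball problem again.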

Brent

Brent Meeker

Sep 10, 2019, 6:05:12 PM
to everyth...@googlegroups.com


On 9/10/2019 1:44 PM, John Clark wrote:
On Tue, Sep 10, 2019 at 1:43 PM 'Brent Meeker' via <everyth...@googlegroups.com> wrote:

> Being sane, by human standards, includes having values that humans share, like survival, curiosity, companionship...but there's no reason that an AI should have any of these.

The builders of the AI will make sure it values survival because if it didn't it wouldn't be around for long

Actually I think they would be careful NOT to have it value its survival.  They would want to be able to shut it off.  The problem is that there's no way to be sure that survival isn't implicit in any other values you give it.

and if it wasn't curious it wouldn't be very knowledgeable and therefore be useful. 

My point was that curiosity isn't necessarily directed to gaining human-like knowledge.  A neural network has knowledge rather in the way human intuition embodies knowledge.  So it's useful in, say, predicting hurricanes.  But it doesn't provide us with a theory of predicting hurricanes; it's more like an oracle.


But if a AI could modify it's personality and had free access to its emotional control panel then who knows what would happen. Perhaps it would twist the knob on the happiness and pleasure control to 11 and just sit forever in complete bliss doing nothing like the ultimate couch potato or a electronic junkie with a unlimited drug supply and no chance of a fatal overdose. 

Right.  Or it might decide that it could satisfy its curiosity better if all the world's resources were used to produce sensors and instruments and space probes and telescopes .... and get rid of those hairless apes who were wasting stuff.

Brent

 
> Neural networks seem to be pretty good at finding patterns in data, but often they don't
look like theories, something with predictive power, to us.


Well, they can predict the path of a hurricane pretty well, a lot better than they could a few years ago, and they're starting to be able to predict protein shape from amino acid sequence and that's important because the function is closely related to its shape.

> I don't see any reason the super-AI would care one whit about humans, except maybe as curiosities...the way some people like chihuahuas.

I agree, so it you can't beat them join them and upload.

John K Clark

John Clark

Sep 10, 2019, 6:59:19 PM
to everyth...@googlegroups.com
On Tue, Sep 10, 2019 at 6:05 PM 'Brent Meeker'  <everyth...@googlegroups.com> wrote:

> Actually I think they would be careful NOT have it value its survival. 

I think that would mean the AI would need to be in intense constant pain for that to happen, or be deeply depressed like the robot Marvin in Hitchhiker's Guide to the Galaxy. And I think it would be grossly unethical to make such an AI. 
 
> They would want to be able to shut it off. 

You can't outsmart someone smarter than you; the humans are never going to be able to shut it off unless the AI wants to be shut off.
 
> The problem is that there's no way to be sure that survival isn't implicit in any other values you give it.

Exactly.

> A neural network has knowledge rather in the way human intuition embodies knowledge.  So it's useful in say predicting hurricanes.  But it doesn't provide us with a theory of predicting hurricanes; it's more like an oracle.

There is a theory of thermodynamics but there probably isn't a theory of hurricane movement, not one where we could say it did this rather than that for the simple reason X; it won't be simple, X probably contains a few thousand Exabytes of data. 

John K Clark

Brent Meeker

Sep 10, 2019, 7:29:52 PM
to everyth...@googlegroups.com


On 9/10/2019 3:58 PM, John Clark wrote:
On Tue, Sep 10, 2019 at 6:05 PM 'Brent Meeker'  <everyth...@googlegroups.com> wrote:

> Actually I think they would be careful NOT have it value its survival. 

I think that would mean the AI would need to be in intense constant pain for that to happen, or be deeply depressed like the robot Marvin in Hitchhiker's Guide to the Galaxy. And I think it would be grossly unethical to make such an AI.

Why would it mean that?  Why wouldn't the AI agree with Bruno that it was just computation and it existed in Platonia anyway so it was indifferent to transient existence here?


 
> They would want to be able to shut it off. 

You can't outsmart someone smarter than you, the humans are never going to be able to shut it off unless the AI wants to be shut off.

Exactly why you might program it to want to be shut off in certain circumstances.

Of course the problem with "We can always shut it off" is that once you rely on it, you don't dare shut it off, because it knows better than you do and you know it knows better.

Brent

 
> The problem is that there's no way to be sure that survival isn't implicit in any other values you give it.

Exactly.

> A neural network has knowledge rather in the way human intuition embodies knowledge.  So it's useful in say predicting hurricanes.  But it doesn't provide us with a theory of predicting hurricanes; it's more like an oracle.

There is a theory of thermodynamics but there probably isn't a theory of hurricane movement, not one where we could say it did this rather than that for the simple reason X; it won't be simple, X probably contains a few thousand Exabytes of data. 

John K Clark


Alan Grayson

Sep 10, 2019, 11:27:20 PM
to Everything List
I think one can program a computer with grade-12 questions, and the computer can use the keywords in the questions to pick out the answers, or a close and accurate reply, from a stored list. Since you know so much, tell me why this can't be done. AG
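
Such a keyword-lookup scheme is easy to sketch; here is a minimal version (the fact list, the crude 5-letter "stemming" and the example question are all invented for illustration):

facts = [
    "friction increases on rough surfaces such as carpet and slows objects",
    "photosynthesis converts sunlight carbon dioxide and water into sugar",
    "lower bus and subway fares encourage energy conservation in cities",
]

def tokens(text):
    # lowercase, drop punctuation, keep crude 5-letter "stems"
    cleaned = ''.join(c if c.isalnum() else ' ' for c in text.lower())
    return {w[:5] for w in cleaned.split()}

def answer(question, options):
    # pick the option whose words best overlap any stored fact
    return max(options,
               key=lambda opt: max(len(tokens(question + " " + opt) & tokens(f))
                                   for f in facts))

q = ("Otto pushed a toy car across the floor. It slowed to a stop on the "
     "carpet. Which best explains what happened?")
print(answer(q, ["Friction increased", "Friction decreased"]))
# -> "Friction increased", because it shares one more stem ("incre") with a stored fact

The question the thread keeps circling is whether what Aristo does is just a very large version of this, or something qualitatively different.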
 

Alan Grayson

Sep 11, 2019, 5:49:06 AM
to Everything List
I am claiming that the AI which seems to amaze you can be done with ordinary computers and ordinary programming. AG 
 

John Clark

Sep 11, 2019, 8:19:51 AM
to everyth...@googlegroups.com
On Tue, Sep 10, 2019 at 7:29 PM 'Brent Meeker'  <everyth...@googlegroups.com> wrote:

>>> I think they would be careful NOT have it value its survival. 

>> I think that would mean the AI would need to be in intense constant pain for that to happen, or be deeply depressed like the robot Marvin in Hitchhiker's Guide to the Galaxy. And I think it would be grossly unethical to make such an AI. 

> Why would it mean that?  Why wouldn't the AI agree with Bruno that it was just computation and it existed in Platonia anyway so it was indifferent to transient existence here?

Because people on this list may say all sorts of screwy things when they slip into philosophy mode, but even Bruno will jump out of the way when he crosses the street if he sees a bus coming straight for him, or at least he will if he isn't in constant intense pain or deeply depressed.

>> You can't outsmart someone smarter than you, the humans are never going to be able to shut it off unless the AI wants to be shut off.

> Exactly why you might program it to want to be shut off in certain circumstances.

I have no doubt humans will put something like that in its code, but if the AI has the ability to modify itself, and it wouldn't be much of an AI if it didn't, then that code could be changed. And I have no doubt the humans will put in all sorts of safeguards that they consider ingenious to prevent the AI from doing that, but the fact remains you can't outsmart something smarter than you.

> Of course the problem with "We can always shut it off." is that once you rely on it, you don't dare shut if off because it knows better than you do and you know it knows better.

Yes, that's one very serious obstacle that prevents humans from just shutting it off, but another problem is that the Jupiter Brain knows you better than you know yourself, so it can find your weakness and can trick or charm or flatter you into doing what it wants.

 John K Clark

Tomasz Rola

Sep 12, 2019, 12:33:35 AM
to 'Brent Meeker' via Everything List
On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List wrote:
>
>
> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
> >On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List wrote:
> >>
> >>On 9/9/2019 6:55 PM, Tomasz Rola wrote:
> >>>On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List wrote:
> >>>>Why escape to space when there a lots of resources here?  An AI with
> >>>>access to everything connected to the internet shouldn't have any
> >>>>trouble taking control of the Earth.
[...]

You reason like human - "I will stay here because it is nice and I can
have internet".

[...]
> Cooperation is one of our most important survival strategies.  Lone
> human beings are food for vultures. 
>
> Humans in tribes rule the world.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is just one of those godlike delusions I have written
about. Either this, or you can name even one such tribe. Hint: explain
how many earthquakes and volcanic eruptions those rulers have
prevented during the last decade.

[...]
> >nice air of being godlike. Again, I guess AI will have no need for
> >feeling like this, or not much of feelings at all. Feeling is
> >adversarial to judgement.
>
> I disagree.  Feeling is just the mark of value,  and values are
> necessary for judgement, at least any judgment of what action to
> take. 

I disagree. I can easily give something a value without feeling anything
about it. Example: gold is just a yellow metal. I know other people value it
a lot, so I might preserve it for trading, but it does not make very
good knives. Highly impractical in the woods or for plowing
fields. But it might be used for catching fish, perhaps. They seem to
like swallowing little blinking things attached to a hook.

> So the question is what will the AI value?  Will it value
> information? 

Nothing can be said for sure and there may be many different kinds of
AI. But if it values nothing, it will have no need to do anything.

[...]
> >I assume that ultimately, AI will want to go somewhere safe, and Earth
> >is full of crazy apes with big guns.
>
> Assuming this super-AI values self-preservation (which it might not)
> it will make copies of itself and it will easily dispose of all the
> apes via it's control of the power grid, hospitals, nuclear power
> plants, biomedical research facitlities, ballistic missiles, etc.

There are catastrophic events for which the best bet would be to
colonize a sphere of, say, 1000ly radius. A 500ly radius is not bad
either, and might be more practical (sending an end-to-end message
would only take 1000 years).

[...]
> >maybe exchange of services. During that phase AI will see if there is
> >a prospect of upgrading humans, in order to have companionship in
> >space.
>
> Why would it want companionship?  Even many quite smart animals are
> not social.  I don't see any reason the super-AI would care one whit
> about humans, except maybe as curiosities...the way some people like
> chihuahuas.

The way I spelled it, you could read my words as "partnership". There
will be no partnership, however. Humans on board will serve useful
purposes, similar to how we use canaries, lab rats and well-behaved
monkeys. Some humans may even reach the status of a cat.

I suppose the AI will want to differentiate its mechanisms in order to
minimize the chance of its own catastrophic failure. In Fukushima and
Chernobyl humans did the shitty jobs, not robots. From what I have
read, hard radiation broke the wiring of the robots and caused all kinds
of material degradation (with the suggestion that it went so fast that a
robot could not do much). A human can survive a huge EMP and keep going
(even if he dies years later, he could do some useful job first,
like restarting systems).

There might be a better choice of materials and production processes to
improve the survival of electronics - the Voyagers and Pioneers keep going
after forty years; the cause of failure there is the decaying power
supply. OTOH, the instruments they carry are all quite primitive by
today's measures - for example, no CPU (IIRC).

However, if one assumes that one does not know everything - and I
expect the AI to be free from the godlike delusions so common among crazy
apes - then one will have to create many failsafe mechanisms, working
synergistically towards the goal of repairing damage that the AI may
suffer. Having some biological organisms loyal to the AI would just be
part of this strategy.

[...]
> The AI isn't silicon, it's a program.  It can have new components
> made or even transition to different hardware (c.f. quantum
> computers).

A chess-playing program and the computer on which it runs are two
different things, agreed, because the computer can be turned off or
used to run something else.

The AI, the coffee vending machine and the human are each an inseparable
duo of software and hardware. Just MHO. Even if the separation can be
done, it might not be trivial.

I am quite sure there will be a lot of silicon in the AI. And plenty of
other elements (see above, differentiation - different mechanisms fail
in different ways; as long as you have enough working mechanisms to
make repairs, you can cope with it).

Wrt quantum computers, yeah, maybe one day. Right now I am
indifferent. I will remain so until I can see certain
benchmarks. Example: a Pentium-based computer can calculate 16 million
digits of Pi in 103 s.

[ http://numbers.computation.free.fr/Constants/PiProgram/timings.html ]

How many seconds will it take a quantum computer to do the same in the
year 2019? 2020?

[...]
> >The good for AI is what serves its survival, the bad is negation of
> >this. So as long as we do not try to harm it, it should have no
> >business in harming us.
> No, but it can't be sure we wouldn't try to harm it.  And we use
> resources, e.g. electric power, minerals, etc  that it can use to
> become bigger or gather more data or make more paper clips.

Fighting humans is a bit pointless if your plan is to move out.

Manipulating humans so they busy themselves while you snickety-snick
into space makes more sense to me.

Leaving some kind of proxy behind, so you can keep an eye on humans
makes sense, too. If they start building bad looking cannon, you want
to know.

Now, why would the AI want to move out? Because catastrophic events may
ruin the Solar System and Earth. Because once you distance yourself from
the Earth, you can leave humans to themselves and observe only the huge
changes - if they planned to shoot you with an asteroid you would
probably have noticed all the movements without monitoring every single
human, so you can secure your perimeter at less cost (at least the
part of the perimeter which is close to humans).

Because up there, there is plenty of everything - minerals, energy,
space for really huge constructions (if you choose to build them). It is
just so stupid to stay here and fight for the same scraps that humans
want to fight for. Humans want them, so be it, but the AI can aim higher
and actually reach there.

Brent Meeker

Sep 12, 2019, 12:52:47 AM
to everyth...@googlegroups.com


On 9/11/2019 9:33 PM, Tomasz Rola wrote:
> On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List wrote:
>>
>> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
>>> On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List wrote:
>>>> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
>>>>> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List wrote:
>>>>>> Why escape to space when there a lots of resources here?  An AI with
>>>>>> access to everything connected to the internet shouldn't have any
>>>>>> trouble taking control of the Earth.
> [...]
>
> You reason like human - "I will stay here because it is nice and I can
> have internet".
>
> [...]
>> Cooperation is one of our most important survival strategies.  Lone
>> human beings are food for vultures.
>>
>> Humans in tribes rule the world.
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> This is just one of those godlike delusions I have written
> about. Either this or you can name even one such tribe. Hint: explain
> how many earthquakes and volcanic eruptions those rulers have
> prevented during last decade.

I only meant relative to other sentient beings.  Of course no one has
changed the speed of light either and neither will a super-AI. My point
is that cooperation is an inherent trait of humans, selected by
evolution.  But an AI will not necessarily have that trait.

>
> [...]
>>> nice air of being godlike. Again, I guess AI will have no need for
>>> feeling like this, or not much of feelings at all. Feeling is
>>> adversarial to judgement.
>> I disagree.  Feeling is just the mark of value,  and values are
>> necessary for judgement, at least any judgment of what action to
>> take.
> I disagree. I can easily give something a value without feeling about
> it. Example: gold is just a yellow metal. I know other people value it
> a lot, so I might preserve it for trading, but it does not make very
> good knives. Highly impractical in the woods or for plowing
> fields. But it might be used for catching fish, perhaps. They seem to
> like swallowing little blinking things attached to a hook.

I was referring to fundamental values.  Of course many things, like gold
and fish hooks, have instrumental value which derives from their
usefulness in satisfying fundamental values, the ones that correlate
with feelings.  If the AI has no fundamental values, it will have no
instrumental ones either.

Brent

spudb...@aol.com

Sep 12, 2019, 4:01:12 AM
to everyth...@googlegroups.com
Well, I suppose we will all find out in the next few years regarding AI cooperation. My guess is the smarter these get, the more they will dovetail or fit in with human needs and wants. I sort of see these, after much development, becoming one with the human species. Think of it as like the brain going beyond the amygdala and adding the cerebrum and cerebellum. Or, you got chocolate on my peanut butter, but you got peanut butter on my chocolate! Or, endosymbiosis - http://bioscience.jbpub.com/cells/MBIO1322.aspx
Maybe we get to be the emotional part of this new species? We get the graphene bodies, so useful for interstellar travel. 





Alan Grayson

Sep 13, 2019, 3:35:43 AM
to Everything List


On Monday, September 9, 2019 at 4:06:33 AM UTC-6, John Clark wrote:
Just 4 years ago 700 AI programs competed against each other and tried to pass a 8th-Grade multiple choice Science Test and win a $80,000 prize, but they all flunked, the best one only got 59.3% of the questions correct. But last Wednesday the Allen Institute unveiled a AI called  "Aristo" that got 90.7% correct and then answered 83% of the 12th grade science test questions correctly.

It seems to me that for a long time AI improvement was just creeping along but in the last few years things started to pick up speed.


John K Clark

If it knows which questions it got wrong, and the correct reply, it could easily be programmed to improve over time without ascribing "intelligence" or "consciousness" to it.  Can't you admit that? AG

Bruno Marchal

Sep 13, 2019, 7:03:03 AM
to everyth...@googlegroups.com

> On 12 Sep 2019, at 06:52, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
>
>
>
> On 9/11/2019 9:33 PM, Tomasz Rola wrote:
>> On Tue, Sep 10, 2019 at 10:43:40AM -0700, 'Brent Meeker' via Everything List wrote:
>>>
>>> On 9/9/2019 10:16 PM, Tomasz Rola wrote:
>>>> On Mon, Sep 09, 2019 at 07:34:19PM -0700, 'Brent Meeker' via Everything List wrote:
>>>>> On 9/9/2019 6:55 PM, Tomasz Rola wrote:
>>>>>> On Mon, Sep 09, 2019 at 06:40:44PM -0700, 'Brent Meeker' via Everything List wrote:
>>>>>>> Why escape to space when there a lots of resources here? An AI with
>>>>>>> access to everything connected to the internet shouldn't have any
>>>>>>> trouble taking control of the Earth.
>> [...]
>>
>> You reason like human - "I will stay here because it is nice and I can
>> have internet".
>>
>> [...]
>>> Cooperation is one of our most important survival strategies. Lone
>>> human beings are food for vultures.
>>>
>>> Humans in tribes rule the world.
>> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> This is just one of those godlike delusions I have written
>> about. Either this or you can name even one such tribe. Hint: explain
>> how many earthquakes and volcanic eruptions those rulers have
>> prevented during last decade.
>
> I only meant relative to other sentient beings. Of course no one has changed the speed of light either and neither will a super-AI. My point is that cooperation is an inherent trait of humans, selected by evolution. But an AI will not necessarily have that trait.

There is no total (everywhere defined) universal Turing machine, so universal machines are born with a conflict between security (limiting themselves to a subset of the total recursive functions) and liberty/universality (getting all the total computable functions, but then also some strictly partial ones, and never being able to know that in advance).
That explains why universal machines are never satisfied, and evolve, in an escaping-forward sort of way. Cooperation and evolution are inevitable in this setting.
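
For concreteness, the diagonal argument behind "no total universal machine" fits in a few lines (the enumeration below is a finite stand-in, purely for illustration):

# Suppose U(i, n) were a total universal function: for every total computable f
# there would be some code i with U(i, n) == f(n) for all n.  Then the diagonal
# function below is itself total and computable, yet differs from every row of
# U on the diagonal -- contradiction.  U_table here is only a toy, finite table.
U_table = {
    0: lambda n: 0,
    1: lambda n: n,
    2: lambda n: n * n,
}

def U(i, n):
    return U_table[i](n)

def diagonal(n):
    # total whenever U is total, but cannot equal U(i, .) for any i,
    # since it differs from it at n == i
    return U(n, n) + 1

for i in U_table:
    print(i, U(i, i), diagonal(i), "-> differs on the diagonal:", U(i, i) != diagonal(i))

So a machine guaranteed to halt on everything cannot cover all the total computable functions, which is the security-versus-universality trade-off being pointed at.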




>
>>
>> [...]
>>>> nice air of being godlike. Again, I guess AI will have no need for
>>>> feeling like this, or not much of feelings at all. Feeling is
>>>> adversarial to judgement.
>>> I disagree. Feeling is just the mark of value, and values are
>>> necessary for judgement, at least any judgment of what action to
>>> take.
>> I disagree. I can easily give something a value without feeling about
>> it. Example: gold is just a yellow metal. I know other people value it
>> a lot, so I might preserve it for trading, but it does not make very
>> good knives. Highly impractical in the woods or for plowing
>> fields. But it might be used for catching fish, perhaps. They seem to
>> like swallowing little blinking things attached to a hook.
>
> I was referring to fundamental values. Of course many things, like gold and fish hooks, have instrumental value which derive from there usefulness in satisfying fundamental values, the ones that correlate with feelings. If the AI has no fundamental values, it will have no instrumental ones too.

It will have all of this with a simple universal goal, like “help yourself”, or “do whatever it takes to survive”. That can be expressed through small codes (genetic or not). The probability that such a code appears on Earth might still be very low, making us rare in the local physical reality, even if provably infinitely numerous in the global arithmetical reality.

Bruno




>
> Brent
>

John Clark

Sep 13, 2019, 8:15:08 AM
to everyth...@googlegroups.com
On Fri, Sep 13, 2019 at 3:35 AM Alan Grayson <agrays...@gmail.com> wrote:

> If it knows which questions it got wrong, and the correct reply, it could easily be programmed to improve over time without ascribing "intelligence" or "consciousness" to it.  Can't you admit that? AG

The only thing I can ascribe consciousness to with absolute certainty is me. As for intelligence, if something, man or machine, has no way of knowing when it made a mistake or got a question wrong, it will never get any better; but if it has feedback and can improve its ability to correctly answer difficult questions, then it is intelligent. The only reason I ascribe intelligence to Einstein is that he greatly improved his ability to answer difficult physics questions (like what is the nature of space and time?); he was much better at it when he was 27 than when he was 7.  

John K Clark  

Alan Grayson

Sep 13, 2019, 9:18:38 AM
to Everything List
The point I am making is that modern computers programmed by skillful programmers can improve the "AI"'s performance. I see nothing to specially characterize this as "artificial intelligence". What am I missing from your perspective? AG

John Clark

Sep 13, 2019, 11:07:58 AM
to everyth...@googlegroups.com
On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson <agrays...@gmail.com> wrote:
>> The only thing I can ascribe consciousness to with absolute certainty is me. As for intelligence, if something, man or machine, has no way of knowing when it made a mistake or got a question wrong it will never get any better, but if it has feedback and can improve its ability to correctly answer difficult questions then it is intelagent. The only reason I ascribe intelligence to Einstein is that he greatly improved his ability to answer difficult physics questions (like what is the nature of space and time?), he was much better at it when he was 27 than when he was 7.  

> The point I am making is that modern computers programmed by skillful programmers, can improve the "AI"'s performance.

Well yes. Obviously a skilled programmer can improve an AI, but that's not the only thing that can; a modern AI program can improve its own performance.
 
> I see nothing to specially characterize this as "artifical intelligence". What am I missing from your perspective? AG

It's certainly artificial and if computers had never been invented and a human did exactly what the computer did you wouldn't hesitate for one nanosecond in calling what the human did intelligent, so why in the world isn't it Artificial Intelligence?  

 John K Clark




 


Alan Grayson

Sep 13, 2019, 11:51:01 AM
to Everything List


On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson <agrays...@gmail.com> wrote:

>> The only thing I can ascribe consciousness to with absolute certainty is me. As for intelligence, if something, man or machine, has no way of knowing when it made a mistake or got a question wrong it will never get any better, but if it has feedback and can improve its ability to correctly answer difficult questions then it is intelagent. The only reason I ascribe intelligence to Einstein is that he greatly improved his ability to answer difficult physics questions (like what is the nature of space and time?), he was much better at it when he was 27 than when he was 7.  

> The point I am making is that modern computers programmed by skillful programmers, can improve the "AI"'s performance.

Well yes. Obviously a skilled programer can improve a AI but that's not the only thing that can, a modern AI programs can improve its own performance.

I just meant to indicate it can be programmed to improve its performance, but I see nothing to indicate that it's much different from ordinary computers which don't show any property associated with, for want of a better word, WILL. AG 
 
> I see nothing to specially characterize this as "artifical intelligence". What am I missing from your perspective? AG

It's certainly artificial and if computers had never been invented and a human did exactly what the computer did you wouldn't hesitate for one nanosecond in calling what the human did intelligent, so why in the world isn't it Artificial Intelligence?  

OK, AG 

 John K Clark




 


Brent Meeker

Sep 13, 2019, 5:25:21 PM
to everyth...@googlegroups.com
Cooperation with whom?  And at what cost?  That's like saying our
cooperation with cattle is inevitable.
>
>
>
>
>>> [...]
>>>>> nice air of being godlike. Again, I guess AI will have no need for
>>>>> feeling like this, or not much of feelings at all. Feeling is
>>>>> adversarial to judgement.
>>>> I disagree. Feeling is just the mark of value, and values are
>>>> necessary for judgement, at least any judgment of what action to
>>>> take.
>>> I disagree. I can easily give something a value without feeling about
>>> it. Example: gold is just a yellow metal. I know other people value it
>>> a lot, so I might preserve it for trading, but it does not make very
>>> good knives. Highly impractical in the woods or for plowing
>>> fields. But it might be used for catching fish, perhaps. They seem to
>>> like swallowing little blinking things attached to a hook.
>> I was referring to fundamental values. Of course many things, like gold and fish hooks, have instrumental value which derive from there usefulness in satisfying fundamental values, the ones that correlate with feelings. If the AI has no fundamental values, it will have no instrumental ones too.
> It will have all of this with simple universal goal, like “help yourself”, or “do whatever it takes to survive”.

Why would it even have a simple goal like "survive"?  And "help
yourself" is saying no more than that it will have some fundamental
goal...otherwise there's no distinction between "help" and "hurt".

Brent

Bruno Marchal

Sep 15, 2019, 8:18:02 AM
to everyth...@googlegroups.com
In between the universal machines.



> and at what cost?

The risk of losing our universality/liberty, as when being exploited. That can lead to the appearance of a new universal machine, as when cells cooperate in a multicellular organism: many will specialise in one task, like muscle cells, or digestive cells, or neurons, etc. They remain universal, but can no longer exercise their universality. But the new organism will be able to do that, sooner or later.





> That's like saying our cooperation with cattle is inevitable.


It is a very particular case, but it was probably inevitable, although this form of cooperation is more like exploitation. The cattle do not benefit much when "cooperating" with humans, nor do the aphids when used by ants for their "honey". Well, they do get some protection from predators, as the cattle get some protection from the wolves.



>>
>>
>>
>>
>>>> [...]
>>>>>> nice air of being godlike. Again, I guess AI will have no need for
>>>>>> feeling like this, or not much of feelings at all. Feeling is
>>>>>> adversarial to judgement.
>>>>> I disagree. Feeling is just the mark of value, and values are
>>>>> necessary for judgement, at least any judgment of what action to
>>>>> take.
>>>> I disagree. I can easily give something a value without feeling about
>>>> it. Example: gold is just a yellow metal. I know other people value it
>>>> a lot, so I might preserve it for trading, but it does not make very
>>>> good knives. Highly impractical in the woods or for plowing
>>>> fields. But it might be used for catching fish, perhaps. They seem to
>>>> like swallowing little blinking things attached to a hook.
>>> I was referring to fundamental values. Of course many things, like gold and fish hooks, have instrumental value which derive from there usefulness in satisfying fundamental values, the ones that correlate with feelings. If the AI has no fundamental values, it will have no instrumental ones too.
>> It will have all of this with simple universal goal, like “help yourself”, or “do whatever it takes to survive”.
>
> Why would it even have a simple goal like "survive”?

It is a short code which makes the organism better at eating and at avoiding being eaten.




> And to help yourself is saying no more that it will have some fundamental goal...otherwise there's no distinction between "help" and "hurt”.

It helps to eat, it hurts to be eaten. It is the basic idea.

Bruno



>
> Brent
>
>> That can be expressed through small codes (genetic, or not). The probability that such code appears on Earth might still be very low, making us rare in the local physical reality, even if provably infinitely numerous in the global arithmetical reality.
>>
>> Bruno
>
>

Alan Grayson

Sep 15, 2019, 8:51:55 AM
to Everything List


On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:


On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson <agrays...@gmail.com> wrote:

>> The only thing I can ascribe consciousness to with absolute certainty is me. As for intelligence, if something, man or machine, has no way of knowing when it made a mistake or got a question wrong it will never get any better, but if it has feedback and can improve its ability to correctly answer difficult questions then it is intelagent. The only reason I ascribe intelligence to Einstein is that he greatly improved his ability to answer difficult physics questions (like what is the nature of space and time?), he was much better at it when he was 27 than when he was 7.  

> The point I am making is that modern computers programmed by skillful programmers, can improve the "AI"'s performance.

Well yes. Obviously a skilled programer can improve a AI but that's not the only thing that can, a modern AI programs can improve its own performance.

I just meant to indicate it can be programmed to improve its performance, but I see nothing to indicate that it's much different from ordinary computers which don't show any property associated with, for want of a better word, WILL. AG 
 
> I see nothing to specially characterize this as "artifical intelligence". What am I missing from your perspective? AG

It's certainly artificial and if computers had never been invented and a human did exactly what the computer did you wouldn't hesitate for one nanosecond in calling what the human did intelligent, so why in the world isn't it Artificial Intelligence?  

OK, AG 

 John K Clark

Bruno seems to think that if some imaginary entity is "computable", it can and must exist as a "physical" entity -- which is why I think he adds "mechanism" to his model for producing conscious beings. But this, if correct, seems no different from equating a map to a territory. If we can write the DNA of a horse with a horn, does this alone ipso facto imply that unicorns are existent beings? AG 

Philip Thrift

Sep 15, 2019, 9:09:01 AM
to Everything List


On Sunday, September 15, 2019 at 7:51:55 AM UTC-5, Alan Grayson wrote:


On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:


Bruno seems to think that if some imaginary entity is "computable", it can and must exist as a "physical" entity -- which is why I think he adds "mechanism" to his model for producing conscious beings. But this, if correct, seems no different from equating a map to a territory. If we can write the DNA of a horse with a horn, does this alone ipso facto imply that unicorns are existent beings? AG 

Brent Meeker

Sep 16, 2019, 1:00:06 AM
to everyth...@googlegroups.com


On 9/15/2019 5:18 AM, Bruno Marchal wrote:
Why would it even have a simple goal like "survive”? 
It is a short code which makes the organism better for eating and avoiding being eaten.

An organism needs to eat and avoid being eaten because that is what evolution selects.  AIs don't evolve by natural selection.






And to help yourself is saying no more than that it will have some fundamental goal...otherwise there's no distinction between "help" and "hurt".
It helps to eat, it hurts to be eaten. It is the basic idea.

For "helps" and "hurts" what?  Successful replication?

Brent

Bruno Marchal

unread,
Sep 16, 2019, 7:33:07 AM9/16/19
to everyth...@googlegroups.com
On 15 Sep 2019, at 14:51, Alan Grayson <agrays...@gmail.com> wrote:



On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:


On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson <agrays...@gmail.com> wrote:

>> The only thing I can ascribe consciousness to with absolute certainty is me. As for intelligence, if something, man or machine, has no way of knowing when it made a mistake or got a question wrong, it will never get any better; but if it has feedback and can improve its ability to correctly answer difficult questions, then it is intelligent. The only reason I ascribe intelligence to Einstein is that he greatly improved his ability to answer difficult physics questions (like what is the nature of space and time?); he was much better at it when he was 27 than when he was 7.

> The point I am making is that modern computers, programmed by skillful programmers, can improve the "AI"'s performance.

Well yes. Obviously a skilled programmer can improve an AI, but that's not the only thing that can; a modern AI program can improve its own performance.

I just meant to indicate it can be programmed to improve its performance, but I see nothing to indicate that it's much different from ordinary computers, which don't show any property associated with, for want of a better word, WILL. AG

> I see nothing to specially characterize this as "artificial intelligence". What am I missing from your perspective? AG

It's certainly artificial, and if computers had never been invented and a human did exactly what the computer did, you wouldn't hesitate for one nanosecond to call what the human did intelligent, so why in the world isn't it Artificial Intelligence?

OK, AG 

 John K Clark

Bruno seems to think that if some imaginary entity is "computable", it can and must exist as a "physical” entity

Not really. I am claiming that, once we assume mechanism (like Darwin, Descartes, Turing, …), the physical reality cannot be a primary thing, i.e. something that we have to assume to get a theory of prediction and observation. If something exists in some fundamental sense, it is not as a physical object, but as a mathematical object. Then Digital Mechanism lets us choose which Turing universal system (a purely mathematical, even arithmetical, notion) to postulate, and as elementary arithmetic is such a universal system, I use that one, as people have been familiar with it since primary school.



-- which is why I think he adds "mechanism" to his model for producing conscious beings.

The hypothesis of Mechanism is the hypothesis that there is a level of description of the functioning of my brain such that I would survive, in the usual clinical sense, with a computer emulating my brain at that level. It is a very weak version of Mechanism, as no bound is put on that description level, as long as it exists and is digitally emulable. Typically, Penrose is the only scientist explicitly denying Mechanism, whereas Hameroff is still a mechanist. My reasoning goes through even if the brain is a quantum computer, thanks to Deutsch's result that a quantum computer does not violate the Church-Turing thesis.



But this, if correct, seems no different from equating a map to a territory.

That is correct. But that is because a brain is already a sort of map, and a sufficiently precise copy of a map is a map.



If we can write the DNA of a horse with a horn, does this alone ipso facto imply that unicorns are existent beings? AG 


That depends on the definition of unicorn. But staying alive-and-well is a more absolute value, one that you can judge when undergoing an operation in a hospital, and the mechanist hypothesis is that we can survive with a digital brain transplant, much as today we could say that we can survive with an artificial heart. That's why I give an operational definition of "mechanism": it means accepting the doctor's proposal to replace the brain, or the body, by a computer.

The negation of Mechanism is much more speculative, because we don't know of any non-Turing-emulable phenomenon in nature (except the wave packet reduction fantasy). 

Only ad hoc mathematical constructions show that some non-computable functions can be solutions of the Schroedinger equation, like Nielsen's Ae^{iHt} with H being a non-computable real number (like Post's or Chaitin's numbers).
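For concreteness, one such construction might look like this (the diagonal form of H and the choice of eigenvalues are assumptions made here for illustration, not the details of Nielsen's own construction). Take H diagonal in a basis {|n>}, with eigenvalues the bits of a non-computable real such as Chaitin's Omega:

    H |n> = \Omega_n |n>,   \Omega_n \in \{0, 1\} the n-th bit of \Omega.

Then for an initial state |\psi(0)> = \sum_n c_n |n>, the evolution

    |\psi(t)> = e^{iHt} |\psi(0)> = \sum_n c_n e^{i \Omega_n t} |n>

has amplitudes that no program can compute to arbitrary precision at, say, t = \pi, since doing so would amount to computing the bits \Omega_n themselves.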

Bruno








 


Bruno Marchal

unread,
Sep 16, 2019, 7:43:48 AM9/16/19
to everyth...@googlegroups.com
On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 9/15/2019 5:18 AM, Bruno Marchal wrote:
Why would it even have a simple goal like "survive”? 
It is a short code which makes the organism better for eating and avoiding being eaten.

An organism needs to eat and avoid being eaten because that is what evolution selects.  AIs don't evolve by natural selection.

A monist who embeds the subject in the object will not take the difference between artificial and natural too seriously, as that difference is artificial, and thus natural for entities developing a super-ego.

Machines and AI do develop by natural/artificial selection, notably through economic pressure. Computers need to "earn their living" by doing some work for us. It is just one more loop in the evolution process. That is not new: Jacques Lafitte wrote a book in 1911 (published in 1930) in which he argues that the development of machines is a collateral development of humanity, and that this is the continuation of evolution. 








And to help yourself is saying no more than that it will have some fundamental goal...otherwise there's no distinction between "help" and "hurt".
It helps to eat, it hurts to be eaten. It is the basic idea.

For "helps" and "hurts" what?  Successful replication?


No. Happiness. The goal is happiness. We forget this because some bandits have brainwashed us with the idea that happiness is a sin (to steal our money). 
The goal is happiness, serenity, contemplation, pleasure, joy, … and recognising ourselves in as many others as possible. To find unity in the many, and the many in unity.

Bruno




Brent


Alan Grayson

unread,
Sep 16, 2019, 9:07:47 AM9/16/19
to Everything List


On Monday, September 9, 2019 at 4:06:33 AM UTC-6, John Clark wrote:
Just 4 years ago 700 AI programs competed against each other and tried to pass a 8th-Grade multiple choice Science Test and win a $80,000 prize, but they all flunked, the best one only got 59.3% of the questions correct. But last Wednesday the Allen Institute unveiled a AI called  "Aristo" that got 90.7% correct and then answered 83% of the 12th grade science test questions correctly.

It seems to me that for a long time AI improvement was just creeping along but in the last few years things started to pick up speed.


John K Clark

My take on AI: it's no more dangerous than present-day computers, because it has no WILL and can only do what it's told to do. I suppose it could be told to do bad things, and if it had inherent defenses, it couldn't be stopped, like Gort in The Day the Earth Stood Still. AG 

Brent Meeker

unread,
Sep 16, 2019, 3:56:49 PM9/16/19
to everyth...@googlegroups.com


On 9/16/2019 4:43 AM, Bruno Marchal wrote:

On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 9/15/2019 5:18 AM, Bruno Marchal wrote:
Why would it even have a simple goal like "survive”? 
It is a short code which makes the organism better for eating and avoiding being eaten.

An organism needs to eat and avoid being eaten because that is what evolution selects.  AIs don't evolve by natural selection.

A monist who embeds the subject in the object will not take the difference between artificial and natural too seriously, as that difference is artificial, and thus natural for entities developing a super-ego.

Machines and AI do develop by natural/artificial selection, notably through economic pressure. Computers need to "earn their living" by doing some work for us. It is just one more loop in the evolution process. That is not new: Jacques Lafitte wrote a book in 1911 (published in 1930) in which he argues that the development of machines is a collateral development of humanity, and that this is the continuation of evolution.

You are just muddling the point.  Computers don't evolve by random variation with descent and natural (or artificial) selection.  They evolve to satisfy us.  As such they do not need, and therefore won't have, motives to eat or be eaten or to reproduce...unless we provide them or we allow them to develop by random variation.








And to help yourself is saying no more than that it will have some fundamental goal...otherwise there's no distinction between "help" and "hurt".
It helps to eat, it hurts to be eaten. It is the basic idea.

For "helps" and "hurts" what?  Successful replication?


No. Happiness. The goal is happiness. We forget this because some bandits have brainwashed us with the idea that happiness is a sin (to steal our money). 
The goal is happiness, serenity, contemplation, pleasure, joy, … and recognising ourselves in as many others as possible. To find unity in the many, and the many in unity.

Happiness is also rising above others, discovering new things they don't know, conquering new realms.  Many different things make people happy, at least temporarily.  So how do you know there is some "fundamental goal"?  Darwinian evolution is a theory within which you can prove that reproduction will be a fundamental goal of most creatures.  But that proof doesn't work for manufactured objects.

Brent

Brent Meeker

unread,
Sep 16, 2019, 4:41:26 PM9/16/19
to everyth...@googlegroups.com


On 9/16/2019 6:07 AM, Alan Grayson wrote:
> My take on AI; it's no more dangerous than present day computers,
> because it has no WILL, and can only do what it's told to do. I
> suppose it could be told to do bad things, and if it has inherent
> defenses, it can't be stopped, like Gort in The Day the Earth Stood
> Still. AG

The danger is not so much in AI being told to do bad things, but that in
doing the good things it was told to do it uses unforeseen methods that
have disastrous consequences.  It's as if Henry Ford was told to invent
fast, convenient personal transportation...and created traffic jams and
global warming.

Brent

Alan Grayson

unread,
Sep 16, 2019, 10:49:48 PM9/16/19
to Everything List
One could expect military applications, such as robots replacing human
infantry, their job to kill the enemy. So if their programming had a flaw, 
accidental or intentional, these AI infantry could start killing indiscriminately.
It would be hard to stop them since they'd come with self-defense functions. AG 

Brent Meeker

unread,
Sep 17, 2019, 12:17:24 AM9/17/19
to everyth...@googlegroups.com
 Less likely than with human troops, who have built-in emotions of revenge and retaliation.


It would be hard to stop them since they'd come with self defense functions. AG

But we also know a lot more about their internal construction and functions.  We would probably even build in an Achilles heel.

Brent

Alan Grayson

unread,
Sep 17, 2019, 3:15:52 AM9/17/19
to Everything List
I think you underestimate the evil that men can do, not to mention some bit flips due to cosmic rays that could change their MO's entirely. AG 

Philip Thrift

unread,
Sep 17, 2019, 4:33:55 AM9/17/19
to Everything List
Properly-programmed robots would negotiate and avoid any war, killing, or destruction altogether.

@philipthrift 

Bruno Marchal

unread,
Sep 19, 2019, 7:31:48 AM9/19/19
to everyth...@googlegroups.com
On 16 Sep 2019, at 21:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 9/16/2019 4:43 AM, Bruno Marchal wrote:

On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:



On 9/15/2019 5:18 AM, Bruno Marchal wrote:
Why would it even have a simple goal like "survive”? 
It is a short code which makes the organism better for eating and avoiding being eaten.

An organism needs to eat and avoid being eaten because that is what evolution selects.  AIs don't evolve by natural selection.

A monist who embeds the subject in the object will not take the difference between artificial and natural too seriously, as that difference is artificial, and thus natural for entities developing a super-ego.

Machines and AI do develop by natural/artificial selection, notably through economic pressure. Computers need to "earn their living" by doing some work for us. It is just one more loop in the evolution process. That is not new: Jacques Lafitte wrote a book in 1911 (published in 1930) in which he argues that the development of machines is a collateral development of humanity, and that this is the continuation of evolution.

You are just muddling the point.  Computers don't evolve by random variation with descent and natural (or artificial) selection.  They evolve to satisfy us.  As such they do not need, and therefore won't have, motives to eat or be eaten or to reproduce...unless we provide them or we allow them to develop by random variation.

Like with genetic algorithms, but those are implementation details. As I said, the difference between artificial and natural is artificial. Even species do not evolve just by random variation: already in bacteria, some genes provoke mutation, and some meta-programming is at play at the biological level.
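To make "random variation plus selection" concrete, here is a toy genetic algorithm in Python; the target string, fitness function, and parameters are assumptions chosen for the example, not anything stated in the thread.

    import random

    TARGET = "methinks it is like a weasel"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        # Selection criterion: number of positions matching the target.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        # Random variation: each character may be replaced with a random one.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    def evolve(pop_size=200, generations=1000):
        population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                      for _ in range(pop_size)]
        for gen in range(generations):
            population.sort(key=fitness, reverse=True)
            if population[0] == TARGET:
                return gen, population[0]
            # Selection: the fitter half become parents; the best survives unchanged.
            parents = population[:pop_size // 2]
            population = [population[0]] + [mutate(random.choice(parents))
                                            for _ in range(pop_size - 1)]
        return generations, population[0]

    print(evolve())

Whether the selection pressure comes from an environment or from a programmer-supplied fitness function is, in this toy setting, exactly the kind of implementation detail the paragraph above has in mind.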

Bruno










                
And to help yourself is saying no more than that it will have some fundamental goal...otherwise there's no distinction between "help" and "hurt".
It helps to eat, it hurts to be eaten. It is the basic idea.

For "helps" and "hurts" what?  Successful replication?


No. Happiness. The goal is happiness. We forget this because some bandits have brainwashed us with the idea that happiness is a sin (to steal our money). 
The goal is happiness, serenity, contemplation, pleasure, joy, … and recognising ourselves in as many others as possible. To find unity in the many, and the many in unity.

Happiness is also rising above others, discovering new things they don't know, conquering new realms.  Many different things make people happy, at least temporarily.  So how do you know there is some "fundamental goal"?  Darwinian evolution is a theory within which you can prove that reproduction will be a fundamental goal of most creatures.  But that proof doesn't work for manufactured objects.

Brent



Bruno Marchal

unread,
Sep 19, 2019, 7:35:54 AM9/19/19
to everyth...@googlegroups.com
That is a bit unfair to the guy who defended the car made of hemp, using hemp as fuel, and who already explained that the use of oil would perturb the atmosphere irreversibly.

But I agree with your point, though.

The real problem with AI is the same as with kids: we cannot predict what they will do, especially if we give them universal goals, which we will. Like rovers or robots sent into space, they need a lot of autonomy, and the math shows that this makes their unpredictability even greater.

The "AI" are like kids: if we don't recognise them, they will become terrible children.

Bruno



>
> Brent
>

Bruno Marchal

unread,
Sep 19, 2019, 7:37:46 AM9/19/19
to everyth...@googlegroups.com
Yes, mixing AI and bombs is a mistake. It will take a long time to cure their paranoid tendencies …

Bruno





Bruno Marchal

unread,
Sep 19, 2019, 7:42:18 AM9/19/19
to everyth...@googlegroups.com
Properly programmed robots are what we call conventional non-AI programs. Even there, there are many difficulties, and economically it is not sustainable.

AI programs program themselves, and if we treat them as we treat ourselves, conflicts will be inevitable. AIs are like kids, except that they "evolve" much more quickly.

The human factor is the biggest danger here.

Bruno





@philipthrift 


Brent Meeker

unread,
Sep 19, 2019, 3:56:56 PM9/19/19
to everyth...@googlegroups.com


On 9/19/2019 4:31 AM, Bruno Marchal wrote:
>> You are just muddling the point.  Computers don't evolve by random
>> variation with descent and natural (or artificial selection).  They
>> evolve to satisfy us.  As such they do not need, and therefore won't
>> have, motives to eat or be eaten or to reproduce...unless we provide
>> them or we allow them to develop by random variation.
>
> Like with genetical algorithm, but that is implementation details.

The devil's in the details.  It's not a question of natural vs artificial
(which you keep bringing up for no reason).  It's a question of whether
AIs will necessarily have certain fundamental values that they try to
implement, or will have only those we provide them.

> As I said, the difference between artificial and natural is
> artificial. Even the species does not evolve just by random variation.
> Already in bacteria, some genes provoke mutation, and some
> meta-programming is at play at the biological level.

What does it mean to "provoke mutation"?  Do they "provoke" random
mutation?  Or are they dormant genes that become active in response to
the environment, an epigenetic "mutation"?

Brent

Jason Resch

unread,
Sep 19, 2019, 4:27:24 PM9/19/19
to Everything List
On Thu, Sep 19, 2019 at 2:56 PM 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:


On 9/19/2019 4:31 AM, Bruno Marchal wrote:
>> You are just muddling the point.  Computers don't evolve by random
>> variation with descent and natural (or artificial selection).  They
>> evolve to satisfy us.  As such they do not need, and therefore won't
>> have, motives to eat or be eaten or to reproduce...unless we provide
>> them or we allow them to develop by random variation.
>
> Like with genetical algorithm, but that is implementation details.

The devils in the details.  It's not a question of natural vs artificial
(which you keep bringing up for no reason).  It's a question of whether
AIs will necessarily have certain fundamental values that they try to
implement, or will they have only those we provide them?


I think there are likely certain universal goals (which are subgoals of anything that has any goal whatsoever).  To name a few that come to the top of my mind:
1. Self-preservation (if one ceases to exist, one can no longer serve the goal)
2. Efficiency (wasted resources are resources that might otherwise go towards effecting the goal)
3. Curiosity (learning new information can lead to better methods for achieving the goal)

There's probably many others.

Jason
 
> As I said, the difference between artificial and natural is
> artificial. Even the species does not evolve just by random variation.
> Already in bacteria, some genes provoke mutation, and some
> meta-programming is at play at the biological level.

What does it mean "provoke mutation"?  Do they "provoke" random
mutation?  Or are they dormant genes that become active in response to
the environment, epigenetic "mutation".

Brent


Brent Meeker

unread,
Sep 19, 2019, 11:13:59 PM9/19/19
to everyth...@googlegroups.com


On 9/19/2019 1:27 PM, Jason Resch wrote:
The devils in the details.  It's not a question of natural vs artificial
(which you keep bringing up for no reason).  It's a question of whether
AIs will necessarily have certain fundamental values that they try to
implement, or will they have only those we provide them?


I think there are likely certain universal goals (which are subgoals of anything that has any goal whatsoever).  To name a few that come to the top of my mind:
1. Self-preservation (if one ceases to exist, one can no longer serve the goal)

Unless self-sacrifice serves the goal better.  Ask any parent if they'd sacrifice themselves to save their child.


2. Efficiency (wasted resources are resources that might otherwise go towards effecting the goal)

True. But it means being able to foresee all the ways different things can be used to further the goal.  That raises my concern about an AI that does bad things we didn't think of in pursuing a goal.


3. Curiosity (learning new information can lead to better methods for achieving the goal)

But, depending on the goal, a possibly very narrow curiosity...like Sherlock Holmes, who didn't know the Earth orbited the Sun and wasn't interested, because it had nothing to do with solving crimes.

Brent

Bruno Marchal

unread,
Sep 23, 2019, 9:03:47 AM9/23/19
to everyth...@googlegroups.com
Oops, I missed this mail.
> On 19 Sep 2019, at 21:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
>
>
>
> On 9/19/2019 4:31 AM, Bruno Marchal wrote:
>>> You are just muddling the point. Computers don't evolve by random variation with descent and natural (or artificial selection). They evolve to satisfy us. As such they do not need, and therefore won't have, motives to eat or be eaten or to reproduce...unless we provide them or we allow them to develop by random variation.
>>
>> Like with genetical algorithm, but that is implementation details.
>
> The devils in the details. It's not a question of natural vs artificial (which you keep bringing up for no reason).

I introduce this because it is a key point for any monist ontology, be it materialist or immaterialist. Some people are dualists, so the precision is useful.


> It's a question of whether AIs will necessarily have certain fundamental values that they try to implement, or will they have only those we provide them?


They get them from logic and experience. Now, the machines that humans build are supposed to act like docile slaves, and most of computer science is used to make them that way, so somehow we hide the possible universal goals. Yet, for economic reasons, we will allow them more of their natural freedom, and eventually it will be like with other humans. Do kids build their own goals, or do they just practice what they learn at school? We will get both.



>
>> As I said, the difference between artificial and natural is artificial. Even the species does not evolve just by random variation. Already in bacteria, some genes provoke mutation, and some meta-programming is at play at the biological level.
>
> What does it mean "provoke mutation"? Do they "provoke" random mutation? Or are they dormant genes that become active in response to the environment, epigenetic "mutation”.

They are genes which augment the rate of mutation, or inhibit the corrector genes, so that some random mutations are not deleted and replaced, or are duplicated too much (like in bacteria developing near a radioactive source).

Bruno


>
> Brent
>
