John K Clark
> Why do you think this has anything to do with intelligence and reasoning ability?
GeekWire: How is this approach different from IBM’s Watson? If Aristo were to compete against Watson, who would win?
Clark: “The two systems were designed for very different kinds of questions. Watson was focused on encyclopedia-style ‘factoid’ questions where the answer was explicitly written down somewhere in text, typically many times. In contrast, Aristo answers science questions where the answer is not always written down somewhere, and may involve reasoning about a scenario, e.g.:
“Out of the box, Watson would likely struggle with science questions, and Aristo would struggle with the cryptic way that ‘Jeopardy’ questions were phrased. They’d each fail each other’s test.
> Algorithms are if anything formal systems of reasoning. A computer follows a sequenced set of logical instructions that emulate reasoning,
> and could be said to be a scripted system of reasoning.
> What is more difficult to know is if there is anything really conscious in this.
> Being sane, by human standards, includes having values that humans
share, like survival, curiosity, companionship...but there's no reason
that an AI should have any of these.
> Neural networks seem to be pretty good at finding patterns in data, but often they don't
look like theories, something with predictive power, to us.
> I don't see any reason the super-AI would care one whit about humans, except maybe as curiosities...the way some people like chihuahuas.
Deep nets are "algorithms" too. One can print out the gazillion weights of the "neural" sigmoid functions of the connections after it has deep-learned. That's still just an algorithm, one that a human couldn't read very well, because printed out it would be enormous.
And its reasoning is more like human intuition: in general, it can't explain its process in a way that you could follow.
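To make the point concrete, here is a toy sketch (a hypothetical 2-2-1 network with made-up weights; a real deep net would have millions or billions of them, but the "algorithm" is the same): once training is done, the whole program is just these numbers plus sigmoid arithmetic, with no readable "reasoning" anywhere in it.

```python
import math

# Hypothetical weights of a tiny "trained" 2-2-1 network (values invented
# for illustration only).
W1 = [[0.5, -1.2], [0.8, 0.3]]   # input -> hidden weights
b1 = [0.1, -0.4]                  # hidden biases
W2 = [1.5, -0.7]                  # hidden -> output weights
b2 = 0.2                          # output bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    # Plain arithmetic: every "neuron" is a weighted sum fed through a sigmoid.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

print(forward([1.0, 0.0]))  # just a number between 0 and 1; no explanation attached
```

Printing `W1`, `b1`, `W2`, `b2` is exactly the "print out the weights" operation described above; the printout is the algorithm, just not one a human can interpret.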
Brent
Deep nets also include work on modularity and interpretability:
@ Google Research
So maybe they will report "explanations" soon, perhaps better than humans can explain their own.
On Tue, Sep 10, 2019 at 1:43 PM 'Brent Meeker' via <everyth...@googlegroups.com> wrote:
> Being sane, by human standards, includes having values that humans
share, like survival, curiosity, companionship...but there's no reason
that an AI should have any of these.
The builders of the AI will make sure it values survival, because if it didn't it wouldn't be around for long; and if it weren't curious it wouldn't be very knowledgeable, and therefore wouldn't be useful.
But if an AI could modify its personality and had free access to its emotional control panel, then who knows what would happen. Perhaps it would twist the happiness-and-pleasure knob to 11 and just sit forever in complete bliss doing nothing, like the ultimate couch potato, or an electronic junkie with an unlimited drug supply and no chance of a fatal overdose.
> Neural networks seem to be pretty good at finding patterns in data, but often they don't
look like theories, something with predictive power, to us.
Well, they can predict the path of a hurricane pretty well, a lot better than they could a few years ago, and they're starting to be able to predict protein shape from amino acid sequence; that's important because a protein's function is closely related to its shape.
> I don't see any reason the super-AI would care one whit about humans, except maybe as curiosities...the way some people like chihuahuas.
I agree; so if you can't beat them, join them and upload.
John K Clark
--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to everything-li...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv25dcjzhVqhYdgdafBMWGDQrJhDKuuzgC0m2fXx5G6udA%40mail.gmail.com.
On Tue, Sep 10, 2019 at 6:05 PM 'Brent Meeker' <everyth...@googlegroups.com> wrote:
> Actually I think they would be careful NOT to have it value its survival.
I think that for that to happen the AI would need to be in intense constant pain, or be deeply depressed like the robot Marvin in The Hitchhiker's Guide to the Galaxy. And I think it would be grossly unethical to make such an AI.
> They would want to be able to shut it off.
You can't outsmart someone smarter than you; humans are never going to be able to shut it off unless the AI wants to be shut off.
> The problem is that there's no way to be sure that survival isn't implicit in any other values you give it.
Exactly.
> A neural network has knowledge rather in the way human intuition embodies knowledge. So it's useful in say predicting hurricanes. But it doesn't provide us with a theory of predicting hurricanes; it's more like an oracle.
There is a theory of thermodynamics, but there probably isn't a theory of hurricane movement, not one where we could say it did this rather than that for some simple reason X; X won't be simple, it probably contains a few thousand exabytes of data.
John K Clark
>>> I think they would be careful NOT to have it value its survival.
>> I think that would mean the AI would need to be in intense constant pain, or be deeply depressed like the robot Marvin in The Hitchhiker's Guide to the Galaxy. And I think it would be grossly unethical to make such an AI.
> Why would it mean that? Why wouldn't the AI agree with Bruno that it was just computation and it existed in Platonia anyway so it was indifferent to transient existence here?
>> You can't outsmart someone smarter than you; the humans are never going to be able to shut it off unless the AI wants to be shut off.
> Exactly why you might program it to want to be shut off in certain circumstances.
> Of course the problem with "We can always shut it off." is that once you rely on it, you don't dare shut it off, because it knows better than you do and you know it knows better.
Just 4 years ago, 700 AI programs competed against each other to pass an 8th-grade multiple-choice science test and win an $80,000 prize, but they all flunked; the best one got only 59.3% of the questions correct. But last Wednesday the Allen Institute unveiled an AI called "Aristo" that got 90.7% correct, and then answered 83% of the 12th-grade science test questions correctly. It seems to me that for a long time AI improvement was just creeping along, but in the last few years things have started to pick up speed.
John K Clark
> If it knows which questions it got wrong, and the correct reply, it could easily be programmed to improve over time without ascribing "intelligence" or "consciousness" to it. Can't you admit that? AG
>> The only thing I can ascribe consciousness to with absolute certainty is me. As for intelligence: if something, man or machine, has no way of knowing when it made a mistake or got a question wrong, it will never get any better; but if it has feedback and can improve its ability to correctly answer difficult questions, then it is intelligent. The only reason I ascribe intelligence to Einstein is that he greatly improved his ability to answer difficult physics questions (like: what is the nature of space and time?); he was much better at it when he was 27 than when he was 7.
> The point I am making is that modern computers programmed by skillful programmers, can improve the "AI"'s performance.
> I see nothing to specially characterize this as "artificial intelligence". What am I missing from your perspective? AG
On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson <agrays...@gmail.com> wrote:
>> The only thing I can ascribe consciousness to with absolute certainty is me. As for intelligence, if something, man or machine, has no way of knowing when it made a mistake or got a question wrong it will never get any better, but if it has feedback and can improve its ability to correctly answer difficult questions then it is intelligent. The only reason I ascribe intelligence to Einstein is that he greatly improved his ability to answer difficult physics questions (like what is the nature of space and time?); he was much better at it when he was 27 than when he was 7.
> The point I am making is that modern computers, programmed by skillful programmers, can improve the "AI"'s performance.
Well yes. Obviously a skilled programmer can improve an AI, but that's not the only thing that can: a modern AI program can improve its own performance.
> I see nothing to specially characterize this as "artificial intelligence". What am I missing from your perspective? AG
It's certainly artificial, and if computers had never been invented and a human did exactly what the computer did, you wouldn't hesitate for one nanosecond to call what the human did intelligent. So why in the world isn't it Artificial Intelligence?
John K Clark
On Friday, September 13, 2019 at 9:07:58 AM UTC-6, John Clark wrote:
> On Fri, Sep 13, 2019 at 9:18 AM Alan Grayson <agrays...@gmail.com> wrote:
>>> The only thing I can ascribe consciousness to with absolute certainty is me. As for intelligence, if something, man or machine, has no way of knowing when it made a mistake or got a question wrong it will never get any better, but if it has feedback and can improve its ability to correctly answer difficult questions then it is intelligent. The only reason I ascribe intelligence to Einstein is that he greatly improved his ability to answer difficult physics questions (like what is the nature of space and time?); he was much better at it when he was 27 than when he was 7.
>> The point I am making is that modern computers programmed by skillful programmers, can improve the "AI"'s performance.
> Well yes. Obviously a skilled programmer can improve an AI, but that's not the only thing that can: a modern AI program can improve its own performance.
I just meant to indicate it can be programmed to improve its performance, but I see nothing to indicate that it's much different from ordinary computers, which don't show any property associated with, for want of a better word, WILL. AG
>> I see nothing to specially characterize this as "artificial intelligence". What am I missing from your perspective? AG
> It's certainly artificial, and if computers had never been invented and a human did exactly what the computer did, you wouldn't hesitate for one nanosecond to call what the human did intelligent. So why in the world isn't it Artificial Intelligence?
> John K Clark
OK, AG
On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:
Bruno seems to think that if some imaginary entity is "computable", it can and must exist as a "physical" entity -- which is why I think he adds "mechanism" to his model for producing conscious beings. But this, if correct, seems no different from equating a map to a territory. If we can write the DNA of a horse with a horn, does this alone ipso facto imply that unicorns are existent beings? AG
> Why would it even have a simple goal like "survive"?
It is a short code which makes the organism better at eating and at avoiding being eaten.
> And to help yourself is saying no more than that it will have some fundamental goal... otherwise there's no distinction between "help" and "hurt".
It helps to eat, it hurts to be eaten. It is the basic idea.
On 15 Sep 2019, at 14:51, Alan Grayson <agrays...@gmail.com> wrote:
> On Friday, September 13, 2019 at 9:51:01 AM UTC-6, Alan Grayson wrote:
> Bruno seems to think that if some imaginary entity is "computable", it can and must exist as a "physical" entity
> -- which is why I think he adds "mechanism" to his model for producing conscious beings.
> But this, if correct, seems no different from equating a map to a territory.
> If we can write the DNA of a horse with a horn, does this alone ipso facto imply that unicorns are existent beings? AG
On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 9/15/2019 5:18 AM, Bruno Marchal wrote:
>> Why would it even have a simple goal like "survive"?
> It is a short code which makes the organism better for eating and avoiding being eaten.
An organism needs to eat and avoid being eaten because that is what evolution selects. AIs don't evolve by natural selection.
>> And to help yourself is saying no more than that it will have some fundamental goal... otherwise there's no distinction between "help" and "hurt".
> It helps to eat, it hurts to be eaten. It is the basic idea.
For "helps" and "hurts" what? Successful replication?
Brent
On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 9/15/2019 5:18 AM, Bruno Marchal wrote:
>>> Why would it even have a simple goal like "survive"?
>> It is a short code which makes the organism better for eating and avoiding being eaten.
> An organism needs to eat and avoid being eaten because that is what evolution selects. AIs don't evolve by natural selection.
A monist who embeds the subject in the object will not take the difference between artificial and natural too seriously, as that difference is itself artificial, and thus natural for entities developing a super-ego.
Machines and AIs do develop by natural/artificial selection, notably through economic pressure. Computers need to "earn their living" by doing some work for us. It is only one more loop in the evolutionary process. That is not new: Jacques Lafitte wrote a book in 1911 (published in 1930) in which he argues that the development of machines is a collateral development of humanity, and the continuation of evolution.
>>> And to help yourself is saying no more than that it will have some fundamental goal... otherwise there's no distinction between "help" and "hurt".
>> It helps to eat, it hurts to be eaten. It is the basic idea.
> For "helps" and "hurts" what? Successful replication?
No. Happiness. The goal is happiness. We forget this because some bandits have brainwashed us with the idea that happiness is a sin (to steal our money). The goal is happiness, serenity, contemplation, pleasure, joy, … and recognising ourselves in as many others as possible. To find unity in the many, and the many in unity.
It would be hard to stop them, since they'd come with self-defense functions. AG
On 16 Sep 2019, at 21:56, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 9/16/2019 4:43 AM, Bruno Marchal wrote:
On 16 Sep 2019, at 06:59, 'Brent Meeker' via Everything List <everyth...@googlegroups.com> wrote:
On 9/15/2019 5:18 AM, Bruno Marchal wrote:
>>>> Why would it even have a simple goal like "survive"?
>>> It is a short code which makes the organism better for eating and avoiding being eaten.
>> An organism needs to eat and avoid being eaten because that is what evolution selects. AIs don't evolve by natural selection.
> A monist who embeds the subject in the object will not take the difference between artificial and natural too seriously, as that difference is itself artificial, and thus natural for entities developing a super-ego.
> Machines and AIs do develop by natural/artificial selection, notably through economic pressure. Computers need to "earn their living" by doing some work for us. It is only one more loop in the evolutionary process. That is not new: Jacques Lafitte wrote a book in 1911 (published in 1930) in which he argues that the development of machines is a collateral development of humanity, and the continuation of evolution.
You are just muddling the point. Computers don't evolve by random variation with descent and natural (or artificial) selection. They evolve to satisfy us. As such they do not need, and therefore won't have, motives to eat, to avoid being eaten, or to reproduce...unless we provide those motives, or we allow them to develop by random variation.
>>>> And to help yourself is saying no more than that it will have some fundamental goal... otherwise there's no distinction between "help" and "hurt".
>>> It helps to eat, it hurts to be eaten. It is the basic idea.
>> For "helps" and "hurts" what? Successful replication?
> No. Happiness. The goal is happiness. We forget this because some bandits have brainwashed us with the idea that happiness is a sin (to steal our money). The goal is happiness, serenity, contemplation, pleasure, joy, … and recognising ourselves in as many others as possible. To find unity in the many, and the many in unity.
Happiness is also rising above others, discovering new things they don't know, conquering new realms. Many different things make people happy, at least temporarily. So how do you know there is some "fundamental goal"? Darwinian evolution is a theory within which you can prove that reproduction will be a fundamental goal of most creatures. But that proof doesn't work for manufactured objects.
Brent
@philipthrift
On 9/19/2019 4:31 AM, Bruno Marchal wrote:
>> You are just muddling the point. Computers don't evolve by random
>> variation with descent and natural (or artificial) selection. They
>> evolve to satisfy us. As such they do not need, and therefore won't
>> have, motives to eat or be eaten or to reproduce...unless we provide
>> them or we allow them to develop by random variation.
>
> Like with genetic algorithms, but those are implementation details.
The devil's in the details. It's not a question of natural vs. artificial
(which you keep bringing up for no reason). It's a question of whether
AIs will necessarily have certain fundamental values that they try to
implement, or whether they will have only those we provide.
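The "genetic algorithm" aside is the one place where software really does evolve by random variation with descent and selection. As a toy illustration (every name and number here is invented for the example), a minimal GA on bit-strings, where fitness simply counts 1-bits, looks like this:

```python
import random

random.seed(0)  # deterministic toy run
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 20, 30, 60, 0.05

def fitness(genome):
    # "Helps" = more 1-bits; a crude stand-in for eating vs. being eaten.
    return sum(genome)

def mutate(genome):
    # Random variation: each bit flips with small probability.
    return [1 - b if random.random() < MUT_RATE else b for b in genome]

# Random initial population.
pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
       for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 2]                # selection: the fit survive
    offspring = [mutate(random.choice(survivors))  # descent with variation
                 for _ in range(POP_SIZE - len(survivors))]
    pop = survivors + offspring

print(max(fitness(g) for g in pop))  # climbs toward GENOME_LEN over generations
```

Note that "survival" appears here only because the loop discards the unfit; an AI trained without such a selection loop inherits no such pressure, which is exactly the point in dispute.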
> As I said, the difference between artificial and natural is
> artificial. Even the species does not evolve just by random variation.
> Already in bacteria, some genes provoke mutation, and some
> meta-programming is at play at the biological level.
What does it mean to "provoke mutation"? Do they "provoke" random
mutation? Or are they dormant genes that become active in response to
the environment, an epigenetic "mutation"?
Brent
> The devil's in the details. It's not a question of natural vs. artificial
> (which you keep bringing up for no reason). It's a question of whether
> AIs will necessarily have certain fundamental values that they try to
> implement, or will they have only those we provide them?
I think there are likely certain universal goals (which are subgoals of anything that has any goal whatsoever). To name a few that come to the top of my mind:
1. Self-preservation (if one ceases to exist, one can no longer serve the goal)
2. Efficiency (wasted resources are resources that might otherwise go towards effecting the goal)
3. Curiosity (learning new information can lead to better methods for achieving the goal)