
AI?


John Larkin

Jun 26, 2019, 11:11:21 AM


https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/

Are there some fundamental new principles to Artificial Intelligence?
Seems to me like it's still just another heap of code.





--

John Larkin Highland Technology, Inc

lunatic fringe electronics

George Herold

Jun 26, 2019, 11:23:19 AM
On Wednesday, June 26, 2019 at 11:11:21 AM UTC-4, John Larkin wrote:
> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
>
> Are there some fundamental new principles to Artificial Intelligence?
> Seems to me like it's still just another heap of code.
>
>
Don't know. Lex Fridman has a pretty good podcast... I've only watched a few.
https://lexfridman.com/ai/

GH

bitrex

Jun 26, 2019, 1:04:02 PM
On 6/26/19 11:11 AM, John Larkin wrote:
> Are there some fundamental new principles to Artificial Intelligence?
> Seems to me like it's still just another heap of code.
>
>

Prompt: "Are there some fundamental new principles to Artificial
Intelligence? Seems to me like it's still just another heap of code."

Completion:

"As they say, it's an API, it's just a way for us to create the
algorithm that will generate the result.

So when it's used say to create a game, it still just gives you the
algorithm. But when someone else comes up with this algorithm and you're
like, what is that different? And then they create their own game, they
have very different algorithms for using the AI. And then there are
companies and people that want to make AI-type games, who have no idea
either way where AI comes from or where the AI in a game is coming from.

It's kind of silly, and so it's really frustrating and I think that
developers are always on this path of just wanting to just see what
everyone else is doing, and I'm sure that they feel frustrated by that
as well. So the big one is that when AI is introduced into the
ecosystem, we need to recognize how different that is because of this
artificial complexity that developers now have to deal with and figure
out how to make it work better and in the same spirit of wanting it to
be fun but also making it practical."

...

The rest of your post was generated by an AI

bitrex

Jun 26, 2019, 1:12:14 PM
On 6/26/19 11:11 AM, John Larkin wrote:
>
>
> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
>
> Are there some fundamental new principles to Artificial Intelligence?
> Seems to me like it's still just another heap of code.
>
>

Prompt: "In his next five years in office, Donald Trump will definitely
make America the greatest it has ever been!"

AI Completion:

He's gonna make America great again!

He's gonna make America great again, no one's going to hate his country,
it's the greatest country in the world.

Thank you, President Trump of the United States of America, if only The
Donald won't have to worry about the American people constantly trying
to kill him!

bitrex

Jun 26, 2019, 1:17:15 PM
Semantic analysis, language-independent context: it "understands" the
subject of the prompt and not only generates grammatically correct text,
but text on the appropriate topic, with the right tone, that is
logically consistent.
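
For flavour, the overall shape of that generation loop can be sketched
in a few lines. This is only a toy: a word-bigram lookup table stands in
for the real transformer network, and the corpus, names and token budget
below are made up purely for illustration.

# Toy illustration of autoregressive text completion: build a word-bigram
# table from a tiny corpus, then extend a prompt one token at a time by
# sampling the next word given the previous one. A real model (e.g. GPT-2)
# replaces the lookup table with a neural network, but the generation
# loop has the same shape.
import random
from collections import defaultdict

corpus = ("artificial intelligence is just another heap of code and "
          "another heap of code is just another tool").split()

bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def complete(prompt_words, n_tokens=8):
    out = list(prompt_words)
    for _ in range(n_tokens):
        candidates = bigrams.get(out[-1])
        if not candidates:          # dead end: no continuation ever seen
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(complete(["heap"]))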

Tom Gardner

Jun 26, 2019, 2:06:46 PM
On 26/06/19 16:11, John Larkin wrote:
>
>
> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
>
> Are there some fundamental new principles to Artificial Intelligence?
> Seems to me like it's still just another heap of code.

Yes there are fundamental differences...

The "code" is in the numbers defining various weights
inside the system. It is a neat example of Turing's
insight that there is no difference between data and
instructions - except in how you interpret the numbers.
Softies that only work at a high level typically don't
see that as being obvious.

The weights are neither designed nor understood by humans;
they emerge during training. The corollary is that nobody
knows /why/ something is behaving the way it is, nor
can they predict what input changes might produce unacceptable
outputs.

The latter behaviour is seen in wetware, but the wetware
is more likely to be able to explain why it is behaving
in a certain way. But even that isn't guaranteed, doubly
so when malice is considered.
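
To make the data-versus-instructions point concrete, here's a toy sketch
(illustrative only, nothing to do with any real system): the same few
lines of code compute AND or OR depending purely on the numbers handed
to them, so the "program" really does live in the weights.

# Sketch of the point that the "program" lives in the weights: the same
# code computes AND or OR depending purely on the numbers fed in. In a
# trained network those numbers emerge from training rather than from a
# human design, which is why nobody can point at a line of "code" that
# explains a given decision.
def neuron(inputs, weights, bias):
    # single threshold unit: fires if the weighted sum clears the bias
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

AND_weights, AND_bias = [1.0, 1.0], -1.5
OR_weights,  OR_bias  = [1.0, 1.0], -0.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", neuron([a, b], AND_weights, AND_bias),
                    "OR:",  neuron([a, b], OR_weights, OR_bias))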

John Doe

Jun 26, 2019, 2:50:46 PM
John Larkin <jjla...@highlandtechnology.com> wrote:

> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
>
> Are there some fundamental new principles to Artificial Intelligence?
> Seems to me like it's still just another heap of code.

The first chess match in which a computer beat a world chess
champion, Garry Kasparov versus Deep Blue, it used brute force
processing power. Artificial intelligence is nothing but a silly
troll. Computers are tools, just like physical tools. Nobody
wants a computer to think for itself, any more than we would
want a physical tool to start doing stuff on its own.

You can teach a monkey to operate. The problem is teaching him
when not to operate.

George Herold

Jun 26, 2019, 3:22:52 PM
You should read about Alpha Go and Alpha Go Zero.

George H.

bitrex

Jun 26, 2019, 3:30:25 PM
"brute force" would mean the software searched the state space
exhaustively but it didn't do that it's impossible.

Martin Brown

Jun 27, 2019, 4:09:27 AM
On 26/06/2019 16:11, John Larkin wrote:
>
>
> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
>
> Are there some fundamental new principles to Artificial Intelligence?

Yes. At least for certain types of problem which includes some
interpretation of large scale data and two player games like Go.

Google's Alpha-Go and the later Alpha-Go Zero were able to exceed 3000
years of accumulated human experience of the game in about 40 days,
given just the rules and playing against itself. It even found some new
joseki that had never occurred in human games as a result.

https://www.telegraph.co.uk/science/2017/10/18/alphago-zero-google-deepmind-supercomputer-learns-3000-years/
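
Very loosely, the learn-from-only-the-rules idea can be sketched on a
toy scale. The snippet below is not how AlphaGo Zero actually works
(that combines a deep network with Monte Carlo tree search); here a
lookup table and plain Monte-Carlo averaging stand in for both, applied
to single-pile Nim, just to show that nothing but the rules goes in.

# Loose sketch of "given only the rules, learn by self-play", scaled down
# to a trivially small game (Nim: one pile, take 1-3 stones, taking the
# last stone wins). Tabular values stand in for the deep network and
# plain Monte-Carlo updates stand in for MCTS-guided training.
import random
from collections import defaultdict

Q = defaultdict(float)      # Q[(stones_left, take)] ~ chance the mover wins
N = defaultdict(int)        # visit counts for incremental averaging

def legal_moves(stones):
    return range(1, min(3, stones) + 1)

def choose(stones, eps):
    moves = list(legal_moves(stones))
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

def self_play_episode(start=10, eps=0.2):
    stones, player, history = start, 0, []
    while stones > 0:
        move = choose(stones, eps)
        history.append((player, stones, move))
        stones -= move
        player ^= 1
    winner = player ^ 1                 # whoever took the last stone wins
    for who, s, m in history:           # Monte-Carlo update toward the outcome
        key = (s, m)
        N[key] += 1
        Q[key] += (float(who == winner) - Q[key]) / N[key]

for _ in range(20000):
    self_play_episode()

# After training, the greedy policy from 10 stones should leave a multiple
# of 4 for the opponent (i.e. take 2), which is the known optimal play.
print(max(legal_moves(10), key=lambda m: Q[(10, m)]))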

When the guy behind the design of this program gave a talk in Cambridge
UK the queue to get in gridlocked parts of the city.

> Seems to me like it's still just another heap of code.

In some ways it is, and as a scary side effect you can't dissect the
reasoning it has used to come to a final conclusion, beyond the fact
that its prediction is that one particular choice is optimal.

In medical diagnosis the edge cases are checked by a skilled human, but
the clear negatives and clear positives can be dealt with by a robot
that never gets tired or distracted and never has bad days. They are
getting almost as good as the best practitioners at reading some types
of X-rays.

--
Regards,
Martin Brown

Martin Brown

Jun 27, 2019, 5:05:19 AM
On 26/06/2019 19:50, John Doe wrote:
> John Larkin <jjla...@highlandtechnology.com> wrote:
>
>> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
>>
>> Are there some fundamental new principles to Artificial Intelligence?
>> Seems to me like it's still just another heap of code.
>
> The first chess match in which a computer beat a world chess
> champion, Garry Kasparov versus Deep Blue, it used brute force
> processing power. Artificial intelligence is nothing but a silly
> troll. Computers are tools, just like physical tools. Nobody
> wants a computer to think for itself, any more than we would
> want a physical tool to start doing stuff on its own.

That was true then but even so it had insights that made Kasparov
believe that the machine team was cheating. It is hard today to find a
serious chess engine running on a modern PC that cannot beat him!

Google's Alpha Go Zero has gone well beyond that point. It is capable of
inventing new strategies for a game ab initio given only the rules.

In a similar way it can learn to interpret bulk data and choose its own
hypotheses across huge multidimensional datasets gaining insights that
humans cannot hope to spot. If it finds something interesting then
humans can look at the output and try to interpret it.
>
> You can teach a monkey to operate. The problem is teaching him
> when not to operate.

There is one in the White House right now. The twit that tweets.

--
Regards,
Martin Brown

Martin Brown

Jun 27, 2019, 5:07:28 AM
On 26/06/2019 20:30, bitrex wrote:
> On 6/26/19 3:22 PM, George Herold wrote:
>> On Wednesday, June 26, 2019 at 2:50:46 PM UTC-4, John Doe wrote:
>>> John Larkin <jjla...@highlandtechnology.com> wrote:
>>>
>>>> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
>>>>
>>>>
>>>> Are there some fundamental new principles to Artificial Intelligence?
>>>> Seems to me like it's still just another heap of code.
>>>
>>> The first chess match in which a computer beat a world chess
>>> champion, Garry Kasparov versus Deep Blue, it used brute force
>>> processing power. Artificial intelligence is nothing but a silly
>>> troll. Computers are tools, just like physical tools. Nobody
>>> wants a computer to think for itself, any more than we would
>>> want a physical tool to start doing stuff on its own.
>>>
>>> You can teach a monkey to operate. The problem is teaching him
>>> when not to operate.
>>
>> You should read about Alpha Go and Alpha go zero.
>
> "brute force" would mean the software searched the state space
> exhaustively but it didn't do that it's impossible.

It was using brute force of sorts, with a chip-level move generator per
square and a workstation coordinating their actions. It was the extra
custom chip hardware, allowing very fast material evaluation, that gave
it the speed that made the difference. Obviously it also used the
well-known alpha-beta heuristic (which essentially prunes all nodes
where the opponent has a clear advantage over some other choice you
already have a score for), as well as all the other heuristics known at
the time.
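
For anyone who hasn't met it, alpha-beta pruning itself fits in a dozen
lines. The sketch below is generic negamax over an abstract game tree;
children() and evaluate() are placeholders for the move generation and
material evaluation that a real engine (or Deep Blue's hardware) would
supply.

# Bare-bones negamax with alpha-beta pruning. A branch is cut off as soon
# as it is known to be worse (for the side to move) than an alternative
# already scored higher up the tree, so whole subtrees never get searched.
def alphabeta(state, depth, alpha, beta, children, evaluate):
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    best = float("-inf")
    for child in moves:
        # score from the opponent's point of view, negated
        score = -alphabeta(child, depth - 1, -beta, -alpha, children, evaluate)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:       # opponent already has a better option: prune
            break
    return best

# Tiny usage example on a hand-made tree: each "state" is either a list of
# child states or a leaf score (from the viewpoint of the player to move
# at that leaf).
leaf = lambda s: isinstance(s, (int, float))
tree = [[3, 5], [2, [9, 1]], [0, -4]]
print(alphabeta(tree, 8, float("-inf"), float("inf"),
                children=lambda s: [] if leaf(s) else s,
                evaluate=lambda s: s if leaf(s) else 0))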

--
Regards,
Martin Brown

Bill Sloman

Jun 27, 2019, 6:10:00 AM
On Thursday, June 27, 2019 at 11:05:19 AM UTC+2, Martin Brown wrote:
> On 26/06/2019 19:50, John Doe wrote:
> > John Larkin <jjla...@highlandtechnology.com> wrote:
> >
> >> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
> >>
> >> Are there some fundamental new principles to Artificial Intelligence?
> >> Seems to me like it's still just another heap of code.
> >
> > The first chess match in which a computer beat a world chess
> > champion, Garry Kasparov versus Deep Blue, it used brute force
> > processing power. Artificial intelligence is nothing but a silly
> > troll. Computers are tools, just like physical tools. Nobody
> > wants a computer to think for itself, any more than we would
> > want a physical tool to start doing stuff on its own.
>
> That was true then but even so it had insights that made Kasparov
> believe that the machine team was cheating. It is hard today to find a
> serious chess engine running on a modern PC that cannot beat him!

It wasn't actually true. Deep Blue didn't search the complete tree of possibilities - the classic brute force approach - but rather a tree that got pruned as soon as any particular branch was clearly sub-optimal. The pruning algorithms had had a lot of work. Deep Blue still looked at a lot more branches than a chess master, but the tree was pruned well enough that it could look further ahead than Kasparov when it mattered.

> Googles Alpha Go Zero has gone well beyond that point. It is capable of
> inventing new strategies for a game ab initio given only the rules.
>
> In a similar way it can learn to interpret bulk data and choose its own
> hypotheses across huge multidimensional datasets gaining insights that
> humans cannot hope to spot. If it finds something interesting then
> humans can look at the output and try to interpret it.
> >
> > You can teach a monkey to operate. The problem is teaching him
> > when not to operate.
>
> There is one in the White House right now. The twit that tweets.

His problem is that he rarely seems to realise that there is a tree of possibilities to search, and even when he does, he lacks the patience to sit around while somebody else searches it for him and gives him a list of more or less rational choices.

--
Bill Sloman, Sydney

Martin Brown

Jun 27, 2019, 7:50:44 AM
On 27/06/2019 11:09, Bill Sloman wrote:
> On Thursday, June 27, 2019 at 11:05:19 AM UTC+2, Martin Brown wrote:
>> On 26/06/2019 19:50, John Doe wrote:
>>> John Larkin <jjla...@highlandtechnology.com> wrote:
>>>
>>>> https://www.technologyreview.com/s/613838/ai-climate-change-machine-learning/
>>>>
>>>>
>>>> Are there some fundamental new principles to Artificial Intelligence?
>>>> Seems to me like it's still just another heap of code.
>>>
>>> The first chess match in which a computer beat a world chess
>>> champion, Garry Kasparov versus Deep Blue, it used brute force
>>> processing power. Artificial intelligence is nothing but a silly
>>> troll. Computers are tools, just like physical tools. Nobody
>>> wants a computer to think for itself, any more than we would want
>>> a physical tool to start doing stuff on its own.
>>
>> That was true then but even so it had insights that made Kasparov
>> believe that the machine team was cheating. It is hard today to
>> find a serious chess engine running on a modern PC that cannot beat
>> him!
>
> It wasn't actually true. Deep Blue didn't search the complete tree of
> possibilities - the classic brute force approach - but rather a tree
> that got pruned as soon as any particular branch was clearly
> sub-optimal. The pruning algorithms had had a lot of work.

The pruning algorithms were not all that different from the existing
state-of-the-art stuff, though. What tipped the balance was that it had
dedicated chip-level parallel hardware for move generation and material
balance scoring. It was massively parallel in dedicated hardware - that
is fundamentally a brute force solution. The thing it did exceptionally
well was applying search extensions when they were needed to reach true
quiescence.

Hsu's book "Behind Deep Blue" is an interesting read. Some of the
technical papers are online for example here:

https://www.mimuw.edu.pl/~ewama/zsi/deepBlue.pdf

> Deep Blue still looked at a lot more branches than a chess master, but
> the tree was pruned well enough that it could look further ahead than
> Kasparov when it mattered.

It was a lot closer than that.

If Kasparov hadn't been spooked by some of its unusually human-like
moves and had played as in his best games, he could probably have won
the match. Unfortunately he got annoyed with it and made some mistakes
that were untypical for him. It was a machine playing unlike any machine
before it.

--
Regards,
Martin Brown

bitrex

Jun 29, 2019, 3:14:11 PM
I'd like a navigation app that took topography into account when
planning a route, e.g. for hybrid vehicles; the shortest route isn't
necessarily the most fuel-economical one. You'd think fleets would be
interested in something like that; maybe they already have it. I know
that in some areas UPS optimizes routes to avoid left turns.

It would be nice if routes could be optimized for ease-of-driving. Waze
is sometimes too clever for its own good; it sees "intellectually" that
a certain route is in theory faster, but doesn't "know" that it's just
sent you down a route that involves several challenging turns or blind
corners and/or narrow streets that end up costing you time.
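
That kind of fuel-aware routing drops straight into an ordinary
shortest-path search: keep the road graph, change the edge weight. A
rough sketch, with the graph, distances, climbs and fuel model all
invented for illustration:

# Rough sketch of topography-aware routing: ordinary Dijkstra, but the
# edge weight is an estimated fuel cost rather than distance, so a
# longer, flatter road can beat a shorter, hillier one.
import heapq

def fuel_cost(dist_km, climb_m):
    # crude stand-in model: base consumption per km plus a penalty per
    # metre of climb; a hybrid would also credit regeneration on descents
    return 0.06 * dist_km + 0.003 * max(climb_m, 0.0)

# road graph: node -> list of (neighbour, distance_km, net_climb_m)
roads = {
    "depot":    [("hilltop", 4.0, 180.0), ("valley", 6.5, 5.0)],
    "hilltop":  [("customer", 3.0, -160.0)],
    "valley":   [("customer", 4.0, 20.0)],
    "customer": [],
}

def cheapest_route(graph, start, goal):
    # Dijkstra over fuel cost instead of distance
    queue = [(0.0, start, [start])]
    best = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if cost > best.get(node, float("inf")):
            continue
        for nxt, dist, climb in graph[node]:
            new_cost = cost + fuel_cost(dist, climb)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return float("inf"), []

# with these made-up numbers the longer but flatter valley road wins
print(cheapest_route(roads, "depot", "customer"))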

bitrex

Jun 29, 2019, 3:16:37 PM
The latter seems like it would need some kind of human-initiated rating
system at first, to get data from which to aggregate driver experience.