
"AI Helps AMD’s Ryzen Take on Intel"


ga...@learn.bio

Jun 9, 2017, 9:56:20 PM
In the spirit of the classic film Amazon Women on the Moon, let's play "Bullshit, or not?"...

http://www.electronicdesign.com/microprocessors/ai-helps-amd-s-ryzen-take-intel

"AMD employs a neural network in its branch prediction subsystem. It builds a model of the code being executed so that its Smart Prefetch can pre-load instructions and optimize the path through the processor pipeline. The neural network is designed to learn from the currently running applications rather than the predefined static analysis often used in other deep learning applications."

So, what say you comp.arch, is this bullshit, or not? It sounds a lot like something that was implemented in marketing to me.

G.

Melzzzzz

Jun 9, 2017, 9:58:56 PM
Well, I've heard of that, but haven't seen a significant impact. Although
some claim that running the same benchmark several times gives some
improvement.


--
press any key to continue or any other to quit...

George Neuner

Jun 10, 2017, 12:11:05 AM
A branch predictor effectively is an associative memory that maps
branch locations to target locations. For a given number of mappings,
an ANN could be quite a bit smaller than an equivalent table, but it
would take longer to "learn" each mapping. And I also have doubts that
it would be much faster to retrieve them.
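For concreteness, the kind of single-layer perceptron predictor discussed in the literature (the Jimenez/Lin 2002 design cited later in this thread) can be sketched in a few lines. The table size, history length, and training threshold below are illustrative choices, not anything AMD has disclosed:

```python
# Minimal sketch of a perceptron branch predictor (after Jimenez & Lin 2002).
# One weight vector per table entry, indexed by branch PC; the dot product
# of weights with the global history decides the prediction.

HISTORY_LEN = 8                        # bits of global branch history
TABLE_SIZE = 64                        # number of perceptrons
THETA = int(1.93 * HISTORY_LEN + 14)   # training threshold from the paper

class PerceptronPredictor:
    def __init__(self):
        # weight[0] is the bias; the rest pair with history bits
        self.weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]
        self.history = [1] * HISTORY_LEN          # +1 = taken, -1 = not taken

    def _output(self, pc):
        w = self.weights[pc % TABLE_SIZE]
        return w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))

    def predict(self, pc):
        return self._output(pc) >= 0              # True = predict taken

    def update(self, pc, taken):
        y = self._output(pc)
        t = 1 if taken else -1
        w = self.weights[pc % TABLE_SIZE]
        # train on a misprediction, or while confidence is below theta
        if (y >= 0) != taken or abs(y) <= THETA:
            w[0] += t
            for i in range(HISTORY_LEN):
                w[i + 1] += t * self.history[i]
        self.history = self.history[1:] + [t]     # shift in the outcome
```

Note how this bears on the unlearning point: the weights adjust incrementally per branch outcome, so "forgetting" an old pattern takes as many corrective updates as learning it did.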

More importantly, ANN based associative memory does not "unlearn"
things either quickly or easily unless you flush *all* the data -
resetting the network to its initial state. And when you exceed the
"memory" capacity of an ANN, it does not necessarily degrade gracefully
but instead can become erratic.

The unlearning problem can be mitigated somewhat by using several
smaller ANNs in parallel rather than a single large one. But the
"memory" capacity of an ANN scales exponentially with the size of the
network: a single 2N node network can encode far more mappings than
can two N node networks.

Then also consider that, to really be effective and improve on
existing table predictors, you need to context switch the ANN with the
program it models to avoid pollution by foreign code.
The unlearning problem again.


I suppose you could just punt and model the whole set of running
software at any given time, but the unlearning problem, combined with
erratic behavior when/if the ANN's capacity is exceeded, makes this
approach rather untenable.


Certainly it technically is possible, but I have to wonder whether it
can improve on table predictors enough to be worth the effort.

YMMV,
George

Ivan Godard

Jun 10, 2017, 12:31:32 AM
Neural net predictors are routine in most modern chips; Google "neural
net branch prediction". Though calling established tech "AI" is
market-buzzing.

EricP

Jun 10, 2017, 12:55:55 AM
I can't say if that is bull or not,
but a bit of rummaging about found this AMD 2010 patent

Combined level 1 and level 2 branch predictor
https://www.google.com/patents/US8788797

which references this 2002 ACM article on neural net branch predictors:

Neural methods for dynamic branch prediction 2002
http://dl.acm.org/citation.cfm?id=571639
http://taco.cse.tamu.edu/pdfs/tocs02.pdf

which has lots of references, dated circa 2005
https://scholar.google.com/scholar?q=%22Neural+Methods+for+Dynamic+Branch+Prediction%22

and with a bit of googling eventually get to a more recent paper,
though not mentioning AMD but at least explaining the concepts:

Using Binary Neural Networks for Hardware Branch Prediction 2016
https://www.researchgate.net/profile/Chase_Gaudet/publication/301804313_Using_Binary_Neural_Networks_for_Hardware_Branch_Prediction/links/5729067908ae057b0a033dc6.pdf

Eric


Quadibloc

Jun 10, 2017, 7:26:02 AM
On Friday, June 9, 2017 at 7:56:20 PM UTC-6, ga...@learn.bio wrote:

> So, what say you comp.arch, is this bullshit, or not? It sounds a lot like
> something that was implemented in marketing to me.

The Perceptron, one of the earliest attempts at a neural network, is also a design
that has been used in branch predictors on several occasions, so the claims are
not likely to be inaccurate.

John Savard

Megol

Jun 10, 2017, 8:39:01 AM
On Saturday, June 10, 2017 at 3:56:20 AM UTC+2, ga...@learn.bio wrote:
> In the spirit of the classic film Amazon Women on the Moon, let's play "Bullshit, or not?"...
>
> http://www.electronicdesign.com/microprocessors/ai-helps-amd-s-ryzen-take-intel
>(snip)
> So, what say you comp.arch, is this bullshit, or not? It sounds a lot like something that was implemented in marketing to me.

Not bullshit. Neural networks are commonly (but IMHO wrongly) called AI, so building a branch predictor from neural networks means one can claim it uses AI to speed up execution. However, other kinds of branch predictors also adapt to extracted execution patterns, so one could claim they are examples of AI too.
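To make that last point concrete: the simplest conventional adaptive predictor, a table of two-bit saturating counters, also "learns" from execution patterns without anyone calling it AI. A minimal sketch (the table size is an arbitrary choice for illustration):

```python
# Classic bimodal predictor: a table of two-bit saturating counters.
# Counter values 0-1 predict not taken; 2-3 predict taken.

TABLE_SIZE = 1024

class BimodalPredictor:
    def __init__(self):
        self.counters = [2] * TABLE_SIZE           # start weakly taken

    def predict(self, pc):
        return self.counters[pc % TABLE_SIZE] >= 2

    def update(self, pc, taken):
        i = pc % TABLE_SIZE
        if taken:
            self.counters[i] = min(self.counters[i] + 1, 3)
        else:
            self.counters[i] = max(self.counters[i] - 1, 0)
```

The hysteresis in the counter means a single anomalous outcome doesn't flip the prediction, which is exactly the kind of pattern adaptation being described.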

In my opinion, extracting patterns is an important part of intelligence, but not sufficient in itself to make something intelligent. Without any understanding of the problem there is no intelligence.

AFAIK AMD uses neural networks for other parts of the pipeline too, but actual information about the design is hard to come by. AMD hasn't (last I looked) even released basic optimization information for Ryzen.

Tapabrata Ghosh

Jun 11, 2017, 11:50:14 PM
It's correct. They're using a perceptron as a branch predictor, which was pretty commonplace as the SOTA until TAGE came along.

Either Excavator or Bulldozer also used a perceptron predictor IIRC, so it probably carried over into the design.

The latest (2016?) branch prediction contest combined perceptrons and TAGE in order to hit branches that TAGE alone missed. This resulted in a new SOTA. I wouldn't put it past AMD to use something of the sort. Alternatively, perhaps they're using a two or three layer network?
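Combining predictors of the sort described above is usually done with a meta-predictor (a "chooser") that tracks which component has been more accurate for each branch. Here is a schematic tournament sketch, with deliberately trivial stand-in components; the class names and the structure are illustrative, not the CBP-2016 design or anything AMD ships:

```python
# Schematic tournament/chooser combination of two component predictors,
# in the spirit of (but far simpler than) perceptron+TAGE hybrids.

class AlwaysTaken:
    """Trivial static component: always predicts taken."""
    def predict(self, pc): return True
    def update(self, pc, taken): pass

class LastOutcome:
    """Trivial dynamic component: predicts the branch repeats itself."""
    def __init__(self): self.last = {}
    def predict(self, pc): return self.last.get(pc, True)
    def update(self, pc, taken): self.last[pc] = taken

class Tournament:
    def __init__(self, p0, p1):
        self.p0, self.p1 = p0, p1
        self.choice = {}                   # 2-bit counter per PC; >=2 favors p1

    def predict(self, pc):
        c = self.choice.get(pc, 2)
        return (self.p1 if c >= 2 else self.p0).predict(pc)

    def update(self, pc, taken):
        # Real hardware would carry the fetch-time predictions along;
        # here we just re-ask each component.
        ok0 = self.p0.predict(pc) == taken
        ok1 = self.p1.predict(pc) == taken
        c = self.choice.get(pc, 2)
        if ok1 and not ok0:
            c = min(c + 1, 3)              # reward whichever was right
        elif ok0 and not ok1:
            c = max(c - 1, 0)
        self.choice[pc] = c
        self.p0.update(pc, taken)
        self.p1.update(pc, taken)
```

The chooser only moves when exactly one component is correct, so it converges on whichever predictor handles a given branch better.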

Anton Ertl

Jun 12, 2017, 6:14:49 AM
Tapabrata Ghosh <sixsamur...@gmail.com> writes:
>It's correct. They're using a perceptron as a branch predictor, which was
>pretty commonplace as the SOTA until TAGE came along.

What does SOTA mean?

>The latest (2016?) branch prediction contest combined perceptrons and TAGE
>in order to hit branches that TAGE alone missed. This resulted in a new
>SOTA. I wouldn't put it past AMD to use something of the sort. Alternatively,
>perhaps they're using a two or three layer network?

I just looked it up <https://www.jilp.org/cbp2016/program.html>.
Interesting stuff. It would be interesting to see how these academic
predictors compare to those we have in real hardware, but
unfortunately we cannot use the inputs they used on real hardware.
The input consists of traces containing data from many different
programs; the slides on that look interesting, too, but are not quite
comprehensible to me without the accompanying presentation.

- anton
--
M. Anton Ertl Some things have to be seen to be believed
an...@mips.complang.tuwien.ac.at Most things have to be believed to be seen
http://www.complang.tuwien.ac.at/anton/home.html

Megol

Jun 12, 2017, 7:51:02 AM
On Monday, June 12, 2017 at 12:14:49 PM UTC+2, Anton Ertl wrote:
> Tapabrata Ghosh <sixsamur...@gmail.com> writes:
> >It's correct. They're using a perceptron as a branch predictor, which was
> >pretty commonplace as the SOTA until TAGE came along.
>
> What does SOTA mean?

State Of The Art.

Quadibloc

Jun 12, 2017, 9:42:47 AM
And that acronym was also used as the brand name for a line of turntables,
competing with others such as the Linn Sondek.

John Savard

Quadibloc

Jun 12, 2017, 9:44:28 AM
Here is a page from their site:

http://www.sotaturntables.com/newtables/starnova.htm

John Savard

Joe Pfeiffer

Jun 13, 2017, 12:29:00 PM
I remember seeing a tongue-in-cheek definition of AI once that included
"once you get it working, it isn't AI any more".

Tapabrata Ghosh

Jun 13, 2017, 12:32:02 PM
Technically perceptrons are AI, but they're a very weak form of it.

EricP

Jun 15, 2017, 2:55:21 PM
Things one can find when googling branch prediction,
from the dept. of "what goes around comes around":

Using Branch Predictors to Monitor Brain Activity
"... find that perceptron branch predictors can predict
cerebellar activity with accuracies as high as 85%"
https://arxiv.org/abs/1705.07887

Eric


Tapabrata Ghosh

Jun 15, 2017, 6:57:20 PM
The new poster boy for "once it works, we stop calling it AI"