>AMD gave a keynote at CES yesterday, but for some reason coverage on the
>tech sites was somewhat limited.
>There were three exciting announcements.
>They announced their new lineup of chips for laptop computers. The higher-
>end chips in that line-up would include AI compute accelerators in addition
>to on-chip graphics.
>While this is exciting, if the AI compute accelerators are any good, why aren't
>there desktop chips with them?
>They announced the lower-speed and less-expensive members of the basic
>Ryzen line-up, with names without "X" at the end of them. But they wouldn't
>be locked against overclocking, just binned lower.
They have 100-200MHz lower turbo clocks. But given that Ryzens can
turbo above the official turbo clock, it's not clear if this means
much of a difference in the clocks actually achieved.
>Here, there are 8-core, 12-core, and 16-core X3D chips. But even though
>8-core chips have one CCX, and 12-core and 16-core chips have _two_ core
>complexes, _all_ the chips in that line-up have only _one_ cache die overlaid
>on the CPU, for 64 extra megabytes of cache.
>I was expecting 64 megabytes on the 8-core and 128 megabytes on the 12-
>core and 16-core, if they were to have X3D in all those sizes. Then, 16-core
>would have the same cache per core as 8-core, and 12-core would have even
>more cache per core.
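For reference, here is the arithmetic behind that expectation, counting
only the stacked cache (64MB per cache die, as announced); the helper
name is my own:

```c
#include <assert.h>

/* Extra (stacked) L3 per core, in MB, for a given number of
   64MB cache dies and cores. */
static double extra_mb_per_core(int cache_dies, int cores)
{
    return (double)(cache_dies * 64) / cores;
}
```

With one die per CCD: 8-core gets 64/8 = 8MB/core, 16-core gets
128/16 = 8MB/core, and 12-core gets 128/12 = 10.7MB/core.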
But the 7800X3D has a turbo clock of 5GHz (compared to 5.4GHz for the
7700X). On the 7900X and 7950X the turbo of the CCD with extra cache
will likely be similarly limited. So by giving you one of each, you
can run the applications that benefit from the extra cache on the CCD
with extra cache, and let less cache-hungry applications benefit from
the higher clock on the other CCD.
The question is whether they manage to get these benefits with
automatic scheduling. I can see how it might work: let the thread
start on the CCD without extra cache; if the thread incurs a lot of L3
cache misses, migrate it to the other CCD. But the question is whether,
and how well, this works in practice.
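A minimal sketch of that heuristic, with invented names and thresholds
(a real scheduler would read the miss counts from performance counters
and would have to tune the thresholds empirically):

```c
#include <assert.h>

enum ccd { CCD_FREQ = 0, CCD_CACHE = 1 };

struct thread_stats {
    unsigned long long l3_accesses;
    unsigned long long l3_misses;
    enum ccd home;
};

/* Called per scheduling interval: move a thread to the V-cache CCD
   once its L3 miss ratio crosses a high-water mark, and back once it
   drops below a low-water mark.  The gap between the two thresholds
   gives hysteresis, so a thread does not ping-pong between CCDs. */
static void rebalance(struct thread_stats *t)
{
    if (t->l3_accesses == 0)
        return;
    double miss_ratio = (double)t->l3_misses / (double)t->l3_accesses;
    if (t->home == CCD_FREQ && miss_ratio > 0.25)
        t->home = CCD_CACHE;   /* cache-hungry: trade clocks for cache */
    else if (t->home == CCD_CACHE && miss_ratio < 0.05)
        t->home = CCD_FREQ;    /* cache-friendly again: reclaim clocks */
}
```

Whether the miss ratio alone is a good enough signal (as opposed to,
say, measuring whether the extra cache actually reduces the misses) is
exactly the open question.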
>In other news... while I'm sure that more advanced computers will help us
>deal with energy efficiency, climate change, and the food shortages caused
>by Russia's invasion of Ukraine...
Why would you think so?
I think that, as long as "more advanced" computers have higher power
limits (which the latest high-end CPUs from Intel and AMD do), they
will help us consume energy faster. Any efficiency gains are eaten up
by giving the computers more to do (e.g., CI). Why? Because we can.
This will also help the climate change faster, and will help Russia
sustain its war against Ukraine.
'Anyone trying for "industrial quality" ISA should avoid undefined behavior.'
Mitch Alsup, <c17fcd89-f024-40e7...@googlegroups.com