Quadibloc <jsa...@ecn.ab.ca> wrote:
>I wonder if this has any connection to the fact that the Kaby Lake i7
>processors are *dual-core* units, but with fancier on-chip GPU power
>(dual core with HyperThreading used to mean i3, so i7 has lost its
>meaning, the meaning it had right up through Skylake, even for laptop
>parts).
No, that's just Intel not launching the larger Kaby Lakes until they
have yields under control and/or enough production capacity,
exactly as they've done with several other launches before this.
Right now there are only low-power (U) and ultra-low-power (Y)
versions of Kaby Lake, and in those segments i7 has *ALWAYS* meant
dual-core+HT, just like the current Kaby Lake i7s.
So until higher-power Kaby Lake chips are available, everyone uses
Skylake when they need the bigger/higher-TDP parts; this is why Apple
just launched several Skylake laptops. We know there's a bunch of
desktop quad-core Kaby Lake parts coming in Q1, but the dates for the
higher-power laptop models aren't known yet (likely Q1 or even Q2).
Yes, it sucks that i3/i5/i7 have no real meaning across the various
segments, but that's been true since they made those names up. The
same term has always meant very different things on desktop, laptop
(35W+) and low/ultra-low power.
>All right... the current high-end is the Nvidia GeForce GTX 1080, and
>that has a TDP of 180 watts. Below 300 watts, below 200 watts
>slightly - but well above 100 watts.
AFAIK AMD's top-end is still Fury X with a TDP of 275W and their
second-rank card is the 390X also with a TDP of 275W.
And while you're correct about the GTX 1080's TDP, it isn't really
Nvidia's top end; that's the Titan X Pascal with a 250W TDP. If AMD
had anything even remotely competitive with the 1080 there would be a
1080 Ti card with a 250-275W TDP... Because they don't need to market
those chips that way, Nvidia instead sells them as the Titan X Pascal
at a much higher markup for the AI & heuristics markets, but despite
the labelling it's definitely Nvidia's top-end graphics card and is
being used as such.
250-275W has been the most common TDP for top-end graphics cards for
the last 5+ GPU generations, and it's actually more likely to exceed
275W than to go below 250W. If AMD's Vega performs like everyone
expects, both sides' top-end cards will very likely be in that
250-275W range "soon" again...
During that period there have also been several graphics cards at
300+W TDP, including one where they had to artificially limit it to
"only" 375W so they could sell it as a PCI-E card (75W from the slot
plus 150W from each of two 8-pin connectors is the most the spec
allows). But pretty much everyone who bought it flipped the
"go faster" switch, which takes it to a ~425W TDP; if you didn't plan
to flip that switch you would have bought a different (cheaper) card...
I also need to point out that these are all official TDPs for
manufacturer-clocked cards. In this segment probably more than 50% of
all cards are sold "factory overclocked", which implies higher or
much higher TDPs... 200W+ isn't uncommon for OC 1080s, and I
suspect many OC 390X cards are in the 325W range (Fury/Fury X is
much less likely to come with wild overclocks).
>But regular Intel products indeed have lower power - a recent Intel
>Extreme Edition CPU has a TDP of 140 watts.
Looks like Intel's current top end is the Xeon E5-2679 v4 with a TDP
of 200W; then there are a few E5/E7 v3 and v4 models with a 165W TDP.
Still, most Intel server CPUs are in the 80-95W or 120-135W TDP bands
(and then probably 65W). OTOH, dual-socket servers are relatively
common, which doubles the CPU power usage (quad-socket is a much,
much smaller market).
>I do remember a news item about Intel selling - with little publicity
>- special high-speed editions of its chips to a select clientele,
>such as automated stock market traders, that come at a premium price
>but which manage to finish their calculations before the other
>fellow's computer.
Intel will happily make custom combinations of core count, frequency
and turbo boost if you order enough chips; it's not exactly
announced, but neither is it really a secret. Oracle, Amazon (Cloud)
and Azure (MS Cloud) are all known to have special editions, and it's
not hard to find out how those are configured.
Oracle's version is probably the most technically interesting; AFAIK
the other known ones are just core-count vs. frequency combinations
that Intel didn't think would sell enough, while Oracle's version is
"flexible" in a way their normal chips aren't.
https://www.extremetech.com/computing/187055-intel-releases-rare-details-of-its-customized-oracle-cpus-and-there-a-lot-more-to-come
I assume the reason "normal" Intel chips don't offer this is that it
requires quite a bit of additional testing, which costs money, and
using it requires special modifications to the OS. If Intel senses a
wider need it may show up in wider deployment.
Oracle certainly has the margins, on the product they chose to use it
in, to pay Intel a significant premium for CPUs with this feature if
they deemed it worthwhile (which they apparently did).