Eh? As far as I know, Intel have NEVER announced any plans for a
consumer version of Larrabee - it always was an experimental chip.
There was a chance that they would commoditise it, for experimental
purposes, but that didn't seem to pan out. Their current plans are
indicated here:
http://techresearch.intel.com/articles/Tera-Scale/1421.htm
They hope to have systems shortly, and to allow selected people
online access from mid-2010, so I would guess that the first ones
that could be bought would be in early 2011. If all goes well.
I have absolutely NO idea of where they are thinking of placing it,
or what scale of price they are considering.
Regards,
Nick Maclaren.
Nick, SCC and Larrabee are different species. Both have plenty of
relatively simple x86 cores on a single chip, but that's about the only
thing they have in common.
1. Larrabee cores are cache-coherent, SCC cores are not.
2. Larrabee's interconnect has a ring topology; SCC's is a mesh.
3. Larrabee cores are about vector performance (512-bit SIMD) and SMT
(4 hardware threads per core). SCC cores are supposed to be stronger
than Larrabee on scalar code and much, much weaker on vector code
(a rough sketch of that contrast follows after this list).
4. Larrabee was originally intended for consumers, both as high-end 3D
graphics engine and as sort-of-GPGPU. Graphics as target for 1st
generation chip is canceled, but it is still possible that it would be
shipped to paying customers as GPGPU. SCC, on the other hand, is
purely experimental.
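To illustrate point 3 in the roughest possible way (this is just
generic C, not LRBni or any real Larrabee intrinsics):

    /* Generic sketch of the vector-vs-scalar contrast, nothing
       Larrabee-specific. On a 512-bit SIMD core a vectorising compiler
       can process 16 of these single-precision elements per instruction;
       on a scalar-oriented core each iteration is one multiply-add. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }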
Short article by David Kanter:
http://www.realworldtech.com/page.cfm?ArticleID=RWT120409180449
Sorry, I missed the latest round of news. In fact, the GPGPU product
is canceled together with the GPU, so now the 45nm LRB is officially
"a prototype".
http://www.anandtech.com/weblog/showpost.aspx?i=659
Thanks for the correction. I have been fully occupied with other matters, and
so seem to have missed some developments. Do you have a pointer
to any technical information?
>4. Larrabee was originally intended for consumers, both as high-end 3D
>graphics engine and as sort-of-GPGPU. Graphics as target for 1st
>generation chip is canceled, but it is still possible that it would be
>shipped to paying customers as GPGPU. SCC, on the other hand, is
>purely experimental.
Now, there I beg to disagree. I have never seen anything reliable
indicating that Larrabee has ever been intended for consumers,
EXCEPT as a 'black-box' GPU programmed by 'Intel partners'. And
some of that information came from semi-authoritative sources in
Intel. Do you have a reference to a conflicting statement from
someone in Intel?
Regards,
Nick Maclaren.
I can guess.
Part of my guess is that this is related to Pat Gelsinger's departure.
Gelsinger was (a) ambitious, intent on becoming Intel CEO (said so in
his book), (b) publicly very much behind Larrabee.
I'm guessing that Gelsinger was trying to ride Larrabee as his ticket to
the next level of executive power. And when Larrabee did not pan out
as well as he might have liked, he left. And/or conversely: when
Gelsinger left, Larrabee lost its biggest executive proponent. Although
my guess is that it was technology wagging the executive career tail: no
amount of executive positioning can make a technology shippable when it
isn't ready.
However, I would not count Larrabee out yet. Hiccups happen.
Although I remain an advocate of GPU style coherent threading
microarchitectures - I think they are likely to be more power efficient
than simple MIMD, whether SMT/HT or MCMT - the pull of X86 will be
powerful. Eventually we will have X86 MIMD/SMT/HT in-order vs X86 MCMT.
Hetero is almost guaranteed. The only question will be hetero OOO/in-order,
or hetero X86 MCMT/GPU. Could be hetero X86 OOO & X86 with GPU-style
Coherent Threading. The latter could even be CT/OOO. But these "could be"s have
no sightings.
> Now, there I beg to disagree. I have never seen anything reliable
> indicating that Larrabee has ever been intended for consumers,
> EXCEPT as a 'black-box' GPU programmed by 'Intel partners'. And
> some of that information came from semi-authoritative sources in
> Intel. Do you have a reference to a conflicting statement from
> someone in Intel?
http://software.intel.com/en-us/blogs/2008/08/11/siggraph-larrabee-and-the-future-of-computing/
Just a blog, not official, although of course anything blogged at Intel
is semi-blest (believe me, I know the flip side.)
Does this mean Larrabee won't be the engine for the PS4?
We were assured that it was, not long ago.
del
The blog post reminded me. I have assumed, for years, that Intel
planned on putting many (>>4) x86 cores on a single-die. I'm sure I
can find Intel presentations from the nineties that seem to make that
clear if I dig hard enough.
From the very beginning, Larrabee seemed to be a technology of destiny
in search of a mission, and the first, most obvious mission for any
kind of massive parallelism is graphics. Thus, Intel explaining why
it would introduce Larrabee at Siggraph always seemed a case of
offering an explanation where none would be needed if the explanation
weren't something they weren't sure they believed themselves (or that
anyone else would). It just seemed like the least implausible mission
for hardware that had been designed to a concept rather than to a
mission. A more plausible claim that they were aiming at HPC probably
wouldn't have seemed like a very attractive business proposition for a
company the size of Intel.
Also from the beginning, I wondered if Intel seriously expected to be
able to compete at the high end with dedicated graphics engines using
x86 cores. Either there was something about the technology I was
missing completely, it was just another Intel bluff, or the "x86"
cores that ultimately appeared on a graphics chip for the market would be
to an x86 as we know it as, say, a ladybug is to a dalmatian.
Robert.
I don't see anything in that that even hints at plans to make
Larrabee available for consumer use. It could just as well be a
probe to test consumer interest - something that even I do!
Regards,
Nick Maclaren.
Yes. But the word "planned" implies a degree of deliberate action
that I believe was absent. They assuredly blithered on about it,
and very probably had meetings about it ....
>From the very beginning, Larrabee seemed to be a technology of destiny
>in search of a mission, and the first, most obvious mission for any
>kind of massive parallelism is graphics. ...
Yes. But what they didn't seem to understand is that they should
have treated it as an experiment. I tried to persuade them that
they needed to make it widely available and cheap, so that the mad
hackers would start to play with it, and see what developed.
Perhaps nothing, but it wouldn't have been Intel's effort that was
wasted.
The same was true of Sun, but they had less margin for selling CPUs
at marginal cost.
Regards,
Nick Maclaren.
My guess is that Intel was pushing for Larrabee to be the PS4 chip.
And, possibly, Sony agreed. Not unreasonably, if Intel had made a
consumer grade Larrabee. Since Larrabee's big pitch is programmability
- cache coherence, MIMD, vectors, familiar stuff. As opposed to the
Cell's idiosyncrasies and programmer hostility, which are probably in
large part to blame for Sony's lack of success with the PS3.
Given the present Larrabee situation, Sony is probably scrambling. Options:
a) go back to Cell.
b) more likely, eke out a year or so with Cell and a PS4 stretch, and
then look around again - possibly at the next Larrabee
c) AMD/ATI Fusion
d) Nvidia? Possibly with the CPU that Nvidia is widely rumored to be
working on.
AMD/ATI and Nvidia might seem the most reasonable, except that both
companies have had trouble delivering. AMD/ATI look best now, but
Nvidia has more "vision". Whatever good that will do them.
Larrabee's attractions remain valid. It is more programmer friendly.
But waiting until Larrabee is ready may be too painful.
Historically, game consoles have a longer lifetime than PCs. They were
programmed closer to the metal, and hence needed stability in order to
warrant software investment.
But DX10-DX11 and OpenGL are *almost* good enough for games, and allow
migrating more frequently to the latest and greatest.
Blue-sky possibility: the PS3-PS4 transition breaking with the tradition
of console stability. The console might stay stable form factor, UI,
and device wise - screen pixels, joysticks, etc. - but may start
changing the underlying compute and graphics engine more quickly than in
the past.
Related: net games.
> Although I remain an advocate of GPU style coherent threading
> microarchitectures - I think they are likely to be more power
> efficient than simple MIMD, whether SMT/HT or MCMT - the pull of X86
> will be powerful.
The main (only?) advantage of the x86 ISA is for running legacy software
(yes, I do consider Windows to be legacy software). And I don't see
this applying for Larrabee -- you can't exploit the parallelism when you
run dusty decks.
When developing new software, you want to use high-level languages and
don't really care too much about the underlying instruction set -- the
programming model you have to use (i.e., shared memory versus message
passing, SIMD vs. MIMD, etc.) is much more important, and that is
largely independent of the ISA.
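As a minimal sketch of what I mean (taking OpenMP and MPI as stand-ins
for the shared-memory and message-passing models; the function names
are mine), note that the ISA appears in neither version, but the
structure of the two is quite different:

    #include <omp.h>
    #include <mpi.h>

    /* Shared-memory model: one address space, the runtime splits the loop. */
    double sum_shared(const double *x, int n)
    {
        double s = 0.0;
        #pragma omp parallel for reduction(+:s)
        for (int i = 0; i < n; i++)
            s += x[i];
        return s;
    }

    /* Message-passing model: each rank owns a slice and explicit
       communication combines the partial sums. */
    double sum_distributed(const double *my_slice, int my_n)
    {
        double local = 0.0, total = 0.0;
        for (int i = 0; i < my_n; i++)
            local += my_slice[i];
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        return total;
    }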
Torben
Could be. That would be especially relevant if Sony were planning
to break out of the 'pure' games market and produce a 'home
entertainment centre'. Larrabee's pitch implied that it would have
been simple to add general Internet access, probably including VoIP,
and quite possibly online ordering, Email etc. We know that some of
the marketing organisations are salivating at the prospect of being
able to integrate games playing, television and online ordering.
I am pretty sure that both Sun and Intel decided against the end-user
market because they correctly deduced that it would not return a
profit but, in my opinion incorrectly, did not think that it might
open up new opportunities. But why Intel seem to have decided
against the use described above is a mystery - perhaps because, like
Motorola with the 88000 as a desktop chip, every potential partner
backed off. And perhaps for some other reason - or perhaps the
rumour of its demise is exaggerated - I don't know.
I heard some interesting reports about the 48 thread CPU yesterday,
incidentally. It's unclear that's any more focussed than Larrabee.
Regards,
Nick Maclaren.
> The main (only?) advantage of the x86 ISA is for running legacy software
> (yes, I do consider Windows to be legacy software). And I don't see
> this applying for Larrabee -- you can't exploit the parallelism when you
> run dusty decks.
But you can exploit the parallelism where you really needed it and carry
on using the dusty decks for all the other stuff, without which you don't
have a rounded product.
That was the theory. We don't know how well it would have panned out,
but it is clearly a sane objective.
Regards,
Nick Maclaren.
> Larrabee's pitch implied that it would have been simple to add general
> Internet access, probably including VoIP, and quite possibly online
> ordering, Email etc.
Why do you suggest that internet access, VoIP or online ordering are
impossible or even hard on existing Cell? It's a full-service Unix
engine, aside from all of the rendering business. Linux runs on it,
which means that all of the interesting browsers run on it just fine.
Sure, there's an advertising campaign (circa NetBurst) that says that
intel makes the internet work better, but we're not buying that, are we?
Cheers,
--
Andrew
Quite a lot of (indirect) feedback from people who have tried using
it, as well as the not-wholly-unrelated Blue Gene. The killer is that
it is conceptually different from 'mainstream' systems, and so each
major version of each product is likely to require extensive work,
possibly including reimplementation or the implementation of a new
piece of infrastructure. That's a long-term sink of effort.
As a trivial example of the sort of problem, a colleague of mine has
some systems with NFS-mounted directories, but where file locking is
disabled (for good reasons). Guess what broke at a system upgrade?
> Linux runs on it,
>which means that all of the interesting browsers run on it just fine.
It means nothing of the sort - even if you mean a fully-fledged system
environment by "Linux", and not just a kernel and surrounding features,
there are vast areas of problematic facilities that most browsers use
that are not needed for a reasonable version of Linux.
>Sure, there's an advertising campaign (circa NetBurst) that says that
>intel makes the internet work better, but we're not buying that, are we?
Of course not.
Regards,
Nick Maclaren.
>
>> Linux runs on it,
>> which means that all of the interesting browsers run on it just fine.
>
> It means nothing of the sort - even if you mean a fully-fledged system
> environment by "Linux", and not just a kernel and surrounding features,
> there are vast areas of problematic facilities that most browsers use
> that are not needed for a reasonable version of Linux.
>
For example?
Once you have an os kernel and drivers on top of the hardware, the hw is
essentially isolated and anything that can compile should run with few
problems. Ok, it may mean that the code runs on one of the n available
processors under the hood, but it should run...
Regards,
Chris
The obvious question then is: Would one of many x86 cores be fast enough
on its own to run legacy Windows code like Office, Photoshop, etc.?...
Regards,
Chris
I wish that this were so.
I naively thought it were so, e.g. for big supercomputers. After all,
they compile all of their code from scratch, right? What do they care
if the actual parallel compute engines are non-x86? Maybe have an x86 in
the box, to run legacy stuff.
Unfortunately, they do care. It may not be the primary concern - after
all, they often compile their code from scratch. But, if not primary,
it is one of the first of the secondary concerns.
Reason: Tools. Ubiquity. Libraries. Applies just as much to Linux as to
Windows. You are running along fine on your non-x86 box, and then
realize that you want to use some open source library that has been
developed and tested mainly on x86. You compile from source, and there
are issues. All undoubtedly solvable, but NOT solved right away. So as
a result, you either can't use the latest and greatest library, or you
have to fix it.
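A made-up but representative example of the kind of thing that bites
(the header and macro names are invented, not from any particular
library):

    /* hypothetical fastmath.h from a library developed and tuned on x86 */
    #if defined(__x86_64__) || defined(__i386__)
    #  include <immintrin.h>       /* x86 SIMD intrinsics: the tested path */
    #  define HAVE_FAST_RSQRT 1
    #elif defined(__powerpc64__)
    #  define HAVE_FAST_RSQRT 0    /* someone contributed this branch once */
    #else
    #  error "port me"             /* ...and every other ISA lands here    */
    #endif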
Like I said, this was supercomputer customers telling me this. Not all
- but maybe 2/3rds. Also, especially, the supercomputer customers'
sysadmins.
Perhaps supercomputers are more legacy x86 sensitive than game consoles...
I almost believed this when I wrote it. And then I thought about flash:
... Than game consoles that want to start running living room
mediacenter applications. That want to start running things like x86
binary plugins, and Flash. Looking at
http://www.adobe.com/products/flashplayer/systemreqs/
The following minimum hardware configurations are recommended for
an optimal playback experience: ... all x86, + PowerPC G5.
I'm sure that you can get a version that runs on your non-x86,
non-PowerPC platform. ... But it's a hassle.
===
Since I would *like* to work on chips in the future as I have in the
past, and since I will never work at Intel or AMD again, I *want* to
believe that non-x86s can be successful. I think they can be
successful. But we should not fool ourselves: there are significant
obstacles, even in the most surprising market segments where x86
compatibility should not be that much of an issue.
We, the non-x86 forces of the world, need to recognize those obstacles,
and overcome them. Not deny their existence.
It's mainly a deal between the platform maker and Adobe. Consider another
market, where x86 is non-existent: Smartphones. They are now real
computers, and Flash is an issue. Solution: Adobe ports the Flash plugin
over to ARM, as well. They already have Flash 9.4 ported (runs on the Nokia
N900), and Flash 10 will get an ARM port soon, as well, and spread around to
more smartphones. Or Skype: Also necessary, also proprietary, but also
available on ARM. As long as the device maker cares, it's their hassle, not
the user's hassle (and even on a "free software only" Netbook Ubuntu,
installing the Flash plugin is too much of a hassle to be considered
fine for mere mortals).
This of course would be much less of a problem if Flash wasn't something
proprietary from Adobe, but an open standard (or at least based on an open
source platform), like HTML.
Note however, that even for a console maker, backward compatibility to the
previous platform is an issue. Sony put the complete PS2 logic (packed into
a newer, smaller chip) on the first PS3 generation to allow people to play
PS2 games with their PS3. If they completely change architecture with the
PS4, will they do that again? Or are they now fed up with this problem, and
will they decide to go x86 and be done with that recurring problem?
--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
I believe Cell was Sony's idea in the first place. I could be wrong
about that, but it was certainly the vibe at the time. And Sony's
lateness and high price were at least as much due to the included
Blu-ray drive, which did lead to them winning the format war against
HD DVD.
> Torben Ægidius Mogensen wrote:
>> When developing new software, you want to use high-level languages and
>> don't really care too much about the underlying instruction set -- the
>> programming model you have to use (i.e., shared memory versus message
>> passing, SIMD vs. MIMD, etc.) is much more important, and that is
>> largely independent of the ISA.
> I naively thought it were so, e.g. for big supercomputers. After all,
> they compile all of their code from scratch, right? What do they care
> if the actual parallel compute engines are non-x86? Maybe have an x86
> in the box, to run legacy stuff.
>
> Unfortunately, they do care. It may not be the primary concern -
> after all, they often compile their code from scratch. But, if not
> primary, it is one of the first of the secondary concerns.
>
> Reason: Tools. Ubiquity. Libraries. Applies just as much to Linux as
> to Windows. You are running along fine on your non-x86 box, and then
> realize that you want to use some open source library that has been
> developed and tested mainly on x86. You compile from source, and
> there are issues. All undoubtedly solvable, but NOT solved right
> away. So as a result, you either can't use the latest and greatest
> library, or you have to fix it.
>
> Like I said, this was supercomputer customers telling me this. Not
> all - but maybe 2/3rds. Also, especially, the supercomputer
> customers' sysadmins.
Libraries are, of course, important to supercomputer users. But if they
are written in a high-level language and the new CPU uses the same
representation of floating-point numbers as the old (e.g., IEEE), they
should compile to the new platform. Sure, some low-level optimisations
may not apply, but if the new platform is a lot faster than the old,
that may not matter. And you can always address the optimisation issue
later.
Besides, until recently supercomputers were not mainly x86-based.
> Perhaps supercomputers are more legacy x86 sensitive than game consoles...
>
> I almost believed this when I wrote it. And then I thought about flash:
>
> ... Than game consoles that want to start running living room
> mediacenter applications. That want to start running things like x86
> binary plugins, and Flash. Looking at
>
> http://www.adobe.com/products/flashplayer/systemreqs/
>
> The following minimum hardware configurations are recommended for
> an optimal playback experience: ... all x86, + PowerPC G5.
>
> I'm sure that you can get a version that runs on your non-x86,
> non-PowerPC platform. ... But it's a hassle.
Flash is available on ARM too. And if another platform becomes popular,
Adobe will port Flash to this too. But that is not the issue: Flash
doesn't run on the graphics processor, it runs on the main CPU, though
it may use the graphics processor through a standard API that hides the
details of the GPU ISA.
Torben
Grrk. All of the above is partially true, but only partially. The
problem is almost entirely with poor-quality software (which is,
regrettably, most of it). Good quality software is portable to
quite wildly different systems fairly easily. It depends on whether
you are talking about performance-critical, numerical libraries
(i.e. what supercomputer users really want to do) or administrative
and miscellaneous software.
For the former, the representation isn't enough, as subtle differences
like hard/soft underflow and exception handling matter, too. And you
CAN'T disable optimisation for supercomputers, because you can't
accept the factor of 10+ degradation. It doesn't help, anyway,
because you will be comparing with an optimised version on the
other systems.
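As a minimal illustration of the underflow point (how this prints
depends entirely on whether the platform, or the compiler flags, flush
subnormals to zero):

    #include <stdio.h>
    int main(void)
    {
        /* The product is about 1e-43, below FLT_MIN (~1.18e-38), so it is
           representable only as a subnormal. With IEEE gradual underflow
           it is a tiny nonzero number; with hard (flush-to-zero) underflow
           it is exactly 0, and anything divided by it later behaves very
           differently. */
        volatile float a = 1.0e-38f, b = 1.0e-5f;
        float c = a * b;
        printf("%g\n", (double)c);
        return 0;
    }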
With the latter, porting is usually trivial, provided that the
program has not been rendered non-portable by the use of autoconfigure,
and that it doesn't use the more ghastly parts of the infrastructure.
But most applications that do rely on those areas aren't relevant
to supercomputers, anyway, because they are concentrated around
the GUI area (and, yes, flash is a good example).
I spent a decade managing the second-largest supercomputer in UK
academia, incidentally, and some of the systems I managed were
'interesting'.
>Besides, until recently supercomputers were not mainly x86-based.
>
>> Perhaps supercomputers are more legacy x86 sensitive than game consoles...
Much less so.
> Sure, some low-level optimisations
> may not apply, but if the new platform is a lot faster than the old,
> that may not matter. And you can always address the optimisation issue
> later.
I don't think Andy was talking about poor optimisation. Perhaps these
libraries have assumed the fairly strong memory ordering model of an x86,
and in its absence would be chock full of bugs.
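A minimal sketch of the kind of latent bug I mean (illustrative names
only): a publish/consume idiom written with plain stores happens to
hold together under the x86's strong ordering, but on a weakly ordered
machine the flag can become visible before the data unless you say so
explicitly:

    #include <stdatomic.h>

    int payload;                /* data being published               */
    atomic_int ready = 0;       /* flag the consumer spins on         */

    void producer(void)
    {
        payload = 42;
        /* A plain 'ready = 1;' mostly works on x86, whose stores are not
           reordered with each other. On ARM/POWER the store to 'ready'
           may be seen first; release ordering restores the guarantee.  */
        atomic_store_explicit(&ready, 1, memory_order_release);
    }

    int consumer(void)
    {
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                   /* spin until published                */
        return payload;         /* now guaranteed to observe 42        */
    }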
> Flash is available on ARM too. And if another platform becomes popular,
> Adobe will port Flash to this too.
When hell freezes over. It took Adobe *years* to get around to porting
Flash to x64.
They had 32-bit versions for Linux and Windows for quite a while, but no
64-bit version for either. To me, that suggests the problem was the
int-size rather than the platform, and it just took several years to clean
it up sufficiently. So I suppose it is *possible* that the next port might
not take so long. On the other hand, both of these targets have Intel's
memory model, so I'd be surprised if even this "clean" version was truly
portable.
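The classic shape of that sort of int-size bug looks something like
this (a hypothetical fragment, obviously not Adobe's code):

    #include <stdint.h>
    #include <string.h>

    /* Fine on 32-bit targets, where int and void* are both 4 bytes;
       silently truncates the pointer on a 64-bit build.              */
    void remember(void *p, unsigned int *slot)
    {
        *slot = (unsigned int)(uintptr_t)p;
    }

    /* Another classic: strlen() returns size_t, and stuffing it into
       an int truncates for buffers over 2GB.                         */
    int length(const char *s)
    {
        return (int)strlen(s);
    }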
> The obvious question then is: Would one of many x86 cores be fast enough
> on its own to run legacy Windows code like Office, Photoshop, etc.?...
Almost certainly. From my own experience, Office 2007 is perfectly usable
on a 2GHz Pentium 4 and only slightly sluggish on a 1GHz Pentium 3. These
applications are already "lightly multi-threaded", with some of the
longer-running operations spun off on background threads, so if you
had 2 or 3 cores that were even slower, that would probably still be OK
because the application *would* divide the workload. For screen drawing,
the OS plays a similar trick.
I would also imagine that Photoshop had enough embarrassing parallelism
that even legacy versions might run faster on a lot of slow cores, but I'm
definitely guessing here.
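For what it's worth, the kind of embarrassing parallelism I have in
mind looks like this (a toy sketch, nothing to do with Photoshop's
actual code): rows of an image are independent, so even many slow
cores can share the work:

    #include <omp.h>

    /* Brighten an 8-bit greyscale image, clamping at 255. Each row is
       independent, so the outer loop parallelises across however many
       (slow) cores happen to be available.                            */
    void brighten(unsigned char *img, int width, int height, int delta)
    {
        #pragma omp parallel for
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int v = img[y * width + x] + delta;
                img[y * width + x] = (unsigned char)(v > 255 ? 255 : v);
            }
        }
    }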
> This of course would be much less of a problem if Flash wasn't something
> proprietary from Adobe [...]
A relevant article:
Free Flash community reacts to Adobe Open Screen Project
http://www.openmedianow.org/?q=node/21
It's just a question of market share.
Contrary to Free Software, where any idiot can port the code to his
platform if he so wishes, proprietary software first requires collecting
a large number of idiots so as to justify
compiling/testing/marketing/distributing the port.
Stefan
From an outside perspective, this sounds a lot like the Itanic roadmap:
announce something brilliant and so far out there that your competitors
believe you must have solutions to all the showstoppers up your sleeve.
Major difference being that Larrabee's potential/probable competitors
didn't fold.
paul
>
> Libraries are, of course, important to supercomputer users. But if they
> are written in a high-level language and the new CPU uses the same
> representation of floating-point numbers as the old (e.g., IEEE), they
> should compile to the new platform. Sure, some low-level optimisations
> may not apply, but if the new platform is a lot faster than the old,
> that may not matter. And you can always address the optimisation issue
> later.
>
But if some clever C programmer or committee of C programmers has made
a convoluted and idiosyncratic change to a definition in a header
file, you may have to unscramble all kinds of stuff hidden under
macros just to get it to compile and link, and that effort can't be
deferred until later.
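An invented example of the flavour I mean (none of these names are
real): a header that hides a platform-dependent definition behind a
macro, so the real type only emerges after you unscramble the
expansion:

    /* clever.h -- invented, but representative                        */
    #ifdef __LP64__
    #  define offset_t long     /* 8 bytes on LP64 hosts...            */
    #else
    #  define offset_t int      /* ...4 bytes everywhere else          */
    #endif

    struct record {
        offset_t where;         /* struct layout now differs per       */
        char     tag[4];        /* platform, as does any file or wire  */
    };                          /* format built from it                */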
Robert.
> From an outside perspective, this sounds a lot like the Itanic roadmap:
> announce something brilliant and so far out there that your competitors
> believe you must have solutions to all the showstoppers up your sleeve.
> Major difference being that Larrabee's potential/probable competitors
> didn't fold.
In American football, "A good quarterback can freeze the opposition’s
defensive secondary with a play-action move, a pump fake or even his
eyes." (from a piece where the analogy was used in a political context).
If I were *any* of the players in this game, I'd be studying the
tactics of quarterbacks who need time to find an open receiver, since
*no one* appears to have the right product ready for prime time. If I
were Intel, I'd be nervous, but if I were any of the other players,
I'd be nervous, too.
Nvidia stock has drooped a bit after the *big* bounce it took on the
Larrabee announcement, but I'm not sure why everyone is so negative on
Nvidia (especially Andy). They don't appear to be in much more
parlous a position than anyone else. If Fermi is a real product, even
if only at a ruinous price, there will be buyers.
N.B. I follow the financial markets for information only. I am not an
active investor.
Robert.
Let me be clear: I'm not negative on Nvidia. I think their GPUs are the
most elegant of the lot. If anything, I am overcompensating: within
Intel, I was probably the biggest advocate of Nvidia style
microarchitecture, arguing against a lot of guys who came to Intel from
ATI. Also on this newsgroup.
However, I don't think that anyone can deny that Nvidia had some
execution problems recently. For their sake, I hope that they have
overcome them.
Also, AMD/ATI definitely overtook Nvidia. I think that Nvidia
emphasized elegance, and GP GPU futures stuff, whereas ATI went the
slightly inelegant way of combining SIMT Coherent Threading with VLIW.
It sounds more elegant when you phrase it my way, "combining SIMT
Coherent Threading with VLIW", than when you have to describe it without
my terminology. Anyway, ATI definitely had a performance per transistor
advantage. I suspect they will continue to have such an advantage over
Fermi, because, after all, VLIW works to some limited extent.
I think Fermi is more programmable and more general purpose, while ATI's
VLIW approach has efficiencies in some areas.
I think that Nvidia absolutely has to have a CPU to have a chance of
competing. One measly ARM chip or PowerPC on an Nvidia die. Or maybe
one CPU chip, one GPU chip, and a stack of memory in a package; or a GPU
plus a memory interface with a lousy CPU. Or, heck, a reasonably
efficient way of decoupling one of Nvidia's processors and running 1
thread, non-SIMT, of scalar code. SIMT is great, but there is important
non-SIMT scalar code.
Ultimately, the CPU vendors will squeeze GPU-only vendors out of the
market. AMD & ATI are already combined. If Intel's Larrabee is
stalled, it gives Nvidia some breathing room, but not much. Even if
Larrabee is completely cancelled, which I doubt, Intel would eventually
squeeze Nvidia out with its evolving integrated graphics. Which,
although widely dissed, really has a lot of potential.
Nvidia's best chance is if Intel thrashes, dithering between Larrabee
and Intel's integrated graphics and ... isn't Intel using PowerVR in
some Atom chips? I.e. Intel currently has at least 3 GPU solutions in
flight. *This* sounds like the sort of thrash Intel had -
x86/i960/i860 ... I personally think that Intel's best path to success
would be to go with a big core + the Intel integrated graphics GPU,
evolved, and then jump to Larrabee. But if they focus on Larrabee, or
an array of Atoms + a big core, their success will just be delayed.
Intel is its own biggest problem, with thrashing.
Meanwhile, AMD/ATI are in the best position. I don't necessarily like
Fusion CPU/GPU, but they have all the pieces. But it's not clear they
know how to use it.
And Nvidia needs to get out of the discrete graphics board market niche
as soon as possible. If they can do so, I bet on Nvidia.
> And Nvidia needs to get out of the discrete graphics board market niche
> as soon as possible. If they can do so, I bet on Nvidia.
Cringely thinks, well, the link says it all:
http://www.cringely.com/2009/12/intel-will-buy-nvidia/
Robert.