Graphics applications like 3ds Max don't work well in EC2. The reasons are:
- Both the hardware Amazon uses to run EC2 and the software (the Xen
virtual machine monitor) are not designed for FPU-intensive applications
such as graphics. Those enterprise servers have very poor FPU
performance; they are optimized for database and web-server workloads
that typically don't require much floating-point processing power. The
servers running your EC2 instance simply don't have much built-in
graphics processing capability beyond a primitive integrated video
controller, possibly shared by 8 - 16 CPU cores.
- All virtualization solutions on the market, Xen included, have very
poor graphics processing performance. You could blame Nvidia/AMD for not
open-sourcing their device drivers, Microsoft for its intended monopoly
on graphics APIs, or the incompetence of the OpenGL ARB working group;
either way, virtualized 3D graphics still has a long way to go. Someone
may point you to virtual machines with graphics acceleration support,
but you will be disappointed by their performance: at best they support
OpenGL 1.3, and none support D3D. This is just the reality.
thanks - larry
--- On Mon, 3/23/09, rick.dane...@gmail.com <rick.dane...@gmail.com> wrote:
|
The problem with graphics is that it is very hardware intensive (both at
the CPU and the GPU level) and needs to move large amounts of data
between the OS and the graphics card at high speed. This is exactly the
sort of thing that virtualization isn't great at.
> I am wondering if anyone knows anything about this sort of thing and
> if you know of any possible solutions. I would strongly prefer to use
> a cloud set-up rather than buying the hardware myself. I have thought
> that maybe it is somehow possible to make use of a local graphics card
> over a high speed internet connection but I have no idea how to even
> approach this.
Offloading the graphics to a local GPU will work well for some graphics
intensive applications, but when you have to move large amounts of data
(say texture maps) between the OS and the graphics card, you are in
trouble. Consider: a high-speed internet connection might offer a few
Mbit/s of bandwidth with noticeable latency, whereas the PCIe bus offers
GB/s of bandwidth with very low latency.
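To put rough numbers on that gap, here is a back-of-the-envelope sketch;
the link speed, texture-set size, and PCIe figure are illustrative
assumptions, not measurements from this thread:

```python
# Compare moving a texture set over a consumer internet link versus a
# local PCIe bus. All figures below are assumed, ballpark values.

def transfer_seconds(size_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Time to move size_bytes at the given bandwidth, ignoring latency."""
    return size_bytes / bandwidth_bytes_per_s

texture_set = 256 * 1024**2   # 256 MiB of texture data (assumed workload)
internet = 10e6 / 8           # 10 Mbit/s link, converted to bytes/s
pcie = 4 * 1024**3            # ~4 GiB/s, roughly PCIe 1.x x16

print(f"internet: {transfer_seconds(texture_set, internet):.0f} s")
print(f"pcie:     {transfer_seconds(texture_set, pcie) * 1000:.1f} ms")
```

Even ignoring latency entirely, the same transfer that takes minutes
over the network completes in tens of milliseconds over the local bus.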
>
> I would greatly appreciate any input from any angle on this, I am
> hoping I am not the only person out there trying to do this sort of
> thing.
>
I suspect that the long term solution may be technologies like
single-root I/O virtualization (SR-IOV) so that the hypervisor can get
out of the way of
graphics processing. But as yet, I've not heard of anyone building an
SR-IOV capable graphics card. This may change as desktop virtualization
becomes more important in the enterprise, though today, that is
typically aimed at desktops that don't require high-performance graphics.
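For reference, on a modern Linux host an SR-IOV-capable PCI device
exposes its virtual functions through sysfs. A sketch of probing and
enabling them might look like the following; the PCI address is a
placeholder, and since no SR-IOV graphics card exists yet, this is
illustrative only:

```shell
# Does the device advertise the SR-IOV extended capability?
lspci -vvv -s 0000:03:00.0 | grep -i "Single Root I/O Virtualization"

# How many virtual functions does the hardware support?
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs

# Create 4 virtual functions; each could then be passed through
# to a guest, bypassing the hypervisor on the data path.
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
```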
--
Nik Simpson
Unfortunately, the graphics cards you see in high-end desktops use PCIe
x16 slots, which you don't find on servers; they are also power hogs and
physically quite large. So they present major challenges in terms of
slot availability, power, and cooling for typical 1U/2U servers used in
cloud applications.
--
Nik Simpson
The trouble is that what constitutes a "graphics intensive" application
is a moving target. When I joined Intergraph in the mid-80s, 2D CAD was
graphics intensive, yet today you can buy a PC with enough horsepower
(both graphics and compute) to do extensive realtime rendering of 3D CAD
models. The same goes for image processing: my camera today produces
images that are substantially larger (in terms of memory footprint) than
the max physical memory size of the VAX machines we were using back then.
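As a rough illustration of that last point, the sensor resolution and
VAX memory figure below are assumptions for the sake of the arithmetic,
not numbers from this thread:

```python
# Compare an uncompressed image from a circa-2009 DSLR against the
# maximum physical memory of a VAX-11/780. Both figures are assumed.

megapixels = 12                    # assumed sensor resolution
bytes_per_pixel = 3                # 24-bit RGB, uncompressed in memory
image_bytes = megapixels * 10**6 * bytes_per_pixel   # ~36 MB

vax_max_bytes = 8 * 1024**2        # assumed 8 MiB physical memory ceiling

print(image_bytes / vax_max_bytes)  # one image spans several machines' RAM
```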
So, I expect that as fast as we solve the bandwidth problems for today's
graphics intensive apps, new apps with even greater demands on CPU, GPU,
memory size, bandwidth, etc. will emerge. For example, today engineers
offload things like computational fluid modeling as batch jobs, but I
would expect that in the future engineering CAD applications will be
running these in real time.
For myself, I believe there will always be a class of applications that
will not fit the cloud model very well, and that availability of local
low-cost, high-performance compute and graphics will always be required.
--
Nik Simpson