I was wondering: would it be possible to build a Mandelbrot fractal
rendering engine that uses GPGPU-style techniques? With cards carrying 128
stream processors, couldn't one compute dozens of pixels at once?
Would it be possible to implement some sort of bignum on the GPU so that
one could do deep zooms? And if so, would it offer any real performance
gain over doing it all on the CPU? I'm asking because I've been thinking
of doing some deep-zoom fractal rendering at fairly high resolution
(1280x720 pixels) and was wondering whether one could build an engine
that leverages the power sitting inside the GPU to do it.
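To make the question concrete, here is the sort of thing I imagine: a
minimal one-thread-per-pixel sketch in CUDA. The kernel name, block size
and iteration cap are all just my own choices, and it uses plain doubles
(which not every card can even do in hardware), so it says nothing about
the bignum part yet:

#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel: each thread iterates z = z^2 + c for its own c.
// Plain double precision, so this only covers shallow zooms.
__global__ void mandel_kernel(int *out, int w, int h,
                              double x0, double y0, double dx, double dy,
                              int max_iter)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= w || py >= h) return;

    double cr = x0 + px * dx, ci = y0 + py * dy;
    double zr = 0.0, zi = 0.0;
    int i = 0;
    while (i < max_iter && zr * zr + zi * zi < 4.0) {
        double t = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = t;
        ++i;
    }
    out[py * w + px] = i;  // iteration count; colour-mapping happens on the host
}

int main()
{
    const int w = 1280, h = 720;
    int *d_out;
    cudaMalloc(&d_out, w * h * sizeof(int));
    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    mandel_kernel<<<grid, block>>>(d_out, w, h,
                                   -2.5, -1.0, 3.5 / w, 2.0 / h, 1000);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}

Since every pixel is independent, this is about as embarrassingly
parallel as a workload gets; the question is what happens once doubles
run out.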
I once stumbled on a post somewhere that said grim things about bignum
on the GPU, something about saturating adds vs. carries (bad), for
example. But is that really so, and would it really be a problem? What
if one used a fine-grained bignum whose limb additions can never overflow,
e.g. base 256 (8 bits/limb) or base 65536 (16 bits/limb)? Or would that
cancel out the performance gains?
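In other words, something like the following is what I mean by fine-grain
limbs: each 16-bit limb lives in a 32-bit register, so a single add can
never overflow and the carry is just the high bits of the word. The limb
count and names are arbitrary, and this is only a sketch:

#include <cstdio>
#include <cuda_runtime.h>

#define NLIMBS 8   /* 8 x 16 bits = 128 bits; an arbitrary choice */

// Base-65536 bignum add: 16-bit limbs held in 32-bit words, so
// a[i] + b[i] + carry is at most 0xFFFF + 0xFFFF + 1 = 0x1FFFF and
// can never overflow the register. No saturating adds involved.
__device__ void big_add(const unsigned int *a, const unsigned int *b,
                        unsigned int *r)
{
    unsigned int carry = 0;
    for (int i = 0; i < NLIMBS; ++i) {      // least-significant limb first
        unsigned int s = a[i] + b[i] + carry;
        r[i]  = s & 0xFFFFu;                // low 16 bits are the limb
        carry = s >> 16;                    // high bits are the carry
    }
}

__global__ void add_kernel(const unsigned int *a, const unsigned int *b,
                           unsigned int *r)
{
    big_add(a, b, r);                       // one thread is enough for a demo
}

int main()
{
    // all-ones + 1: every limb carries, so the result wraps to zero
    unsigned int ha[NLIMBS], hb[NLIMBS] = {1}, hr[NLIMBS];
    for (int i = 0; i < NLIMBS; ++i) ha[i] = 0xFFFFu;

    unsigned int *da, *db, *dr;
    size_t bytes = NLIMBS * sizeof(unsigned int);
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dr, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);
    add_kernel<<<1, 1>>>(da, db, dr);
    cudaMemcpy(hr, dr, bytes, cudaMemcpyDeviceToHost);
    for (int i = NLIMBS - 1; i >= 0; --i) printf("%04x", hr[i]);
    printf("\n");
    cudaFree(da); cudaFree(db); cudaFree(dr);
    return 0;
}

I can see that the carry loop is serial within each pixel's thread, and
that halving the limb width doubles the number of limb operations per
bignum operation, so maybe that's where the gains evaporate?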
Bignum arithmetic on the GPU is generally very slow; it's better to try
some mathematical tricks first.
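For example (just one possible trick, and only my sketch of it): instead
of integer limbs you can chain two floats into a "float-float" value
using error-free transformations, which roughly doubles the mantissa
width with no carry chains at all. Something like this in CUDA, compiled
without -use_fast_math since the trick relies on IEEE rounding:

// A value is the unevaluated sum hi + lo of two floats, giving roughly
// 48 mantissa bits instead of 24. No integer carries anywhere.
struct ff { float hi, lo; };

// Knuth's two-sum: s = fl(a + b) plus the exact rounding error e.
__device__ ff two_sum(float a, float b)
{
    float s = a + b;
    float v = s - a;
    float e = (a - (s - v)) + (b - v);
    ff r; r.hi = s; r.lo = e;
    return r;
}

// Dekker-style addition of two float-float values.
__device__ ff ff_add(ff a, ff b)
{
    ff s = two_sum(a.hi, b.hi);
    float lo = s.lo + a.lo + b.lo;
    return two_sum(s.hi, lo);   // renormalise so |lo| <= 0.5 ulp(hi)
}

// In a Mandelbrot kernel, zr and zi would each become an ff.

Multiplication works the same way via an error-free two-product (a fused
multiply-add recovers the rounding error), and chaining four floats
instead of two is the usual next step. None of this approaches a real
bignum's precision, though, so very deep zooms will still hurt.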
You can do that quite easily using OpenGL 2; the technique is described
in detail in an OCaml Journal article:
http://ocamlnews.blogspot.com/2008/02/high-fidelity-graphics-with-opengl-2.html
--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/?u
What would that be, anyway?