Hi,
On 3 May 2011, at 22:10, erij89 wrote:
> I'm new here and I have a few general questions in order to figure out if Equalizer is applicable for what we want to use it for.
Welcome! The goal of Equalizer is to solve the *generic* problems of writing a parallel and potentially distributed OpenGL application. If it is not applicable, we should figure out why and change it. ;)
> We are trying to build an out-of-core rendering server (stateless!) that should be able to generate different GBuffers depending on the request (query string). Those GBuffers are, e.g., color, depth or objectID images of a particular view. The rendered image should be post-processed, if needed.
> We are currently using OSG for that, plus a pp library that works directly on the image data on the GPU.
Just to make sure: The render server delivers the GBuffers to the client, not simply a rendered image - correct?
> So if we split the image so that a quarter of it is rendered on a different GPU/PC, we would not have the data of the pixels right next to the splitting area. Is there an easy way to enlarge the area each GPU renders (so that there are enough pixels beyond the borders) and, after that, put the image back together?
From my current understanding, I would first assemble the partial images on the destination channel and then run the pp on the assembled result. My assumption here is that the pp does not reduce the amount of data; if it does, it might be worthwhile to do the pp on the source channels instead.
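As a minimal sketch of the first variant (assuming your pp can run as a screen-space pass over the assembled framebuffer; postProcess is a placeholder for your pp library's entry point, and the exact signatures depend on your Equalizer version):

    #include <eq/eq.h>

    // Placeholder for your pp library's entry point:
    void postProcess( int x, int y, int width, int height );

    class Channel : public eq::Channel
    {
    public:
        Channel( eq::Window* parent ) : eq::Channel( parent ) {}

    protected:
        // Called on the destination channel after all input frames have
        // been assembled, so the pp sees the complete image.
        virtual void frameViewFinish( const eq::uint128_t& frameID )
        {
            applyBuffer();   // bind the destination draw buffer
            applyViewport(); // restrict GL to this channel's viewport

            const eq::PixelViewport& pvp = getPixelViewport();
            postProcess( pvp.x, pvp.y, pvp.w, pvp.h );

            eq::Channel::frameViewFinish( frameID );
        }
    };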
> Is there an easy way to use our existing OSG code with shaders that generate the GBuffers for every node?
Yes, just do it. :)
Of course there might be details to work out, but in the draw task method you have native access to a GL context. Have a look at osgScaleViewer for OSG integration.
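For illustration, a rough sketch along the lines of osgScaleViewer, assuming a per-channel osgUtil::SceneView set up elsewhere (signatures again depend on the Equalizer and OSG versions):

    #include <eq/eq.h>
    #include <osg/ref_ptr>
    #include <osgUtil/SceneView>

    class Channel : public eq::Channel
    {
    public:
        Channel( eq::Window* parent ) : eq::Channel( parent ) {}

    protected:
        virtual void frameDraw( const eq::uint128_t& frameID )
        {
            // Sets up draw buffer, viewport and frustum for this channel.
            eq::Channel::frameDraw( frameID );

            // Hand Equalizer's frustum to OSG, then let OSG draw the
            // scene with your GBuffer shaders.
            const eq::Frustumf& frustum = getFrustum();
            _sceneView->setProjectionMatrixAsFrustum(
                frustum.left(), frustum.right(), frustum.bottom(),
                frustum.top(), frustum.near_plane(), frustum.far_plane( ));

            _sceneView->cull();
            _sceneView->draw();
        }

    private:
        osg::ref_ptr< osgUtil::SceneView > _sceneView; // set up in configInit
    };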
> The most important thing: if we split an image, would it be possible to put it back together, even if there are 3-4 representations of the same image (color, depth, normals, objectID)?
You would have to write your own implementation of Channel::frameReadback and Channel::frameAssemble to read and assemble the different representations using the correct source/destination, but this is hardly rocket science.
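A rough sketch of how that could look with the 1.x API; the helpers for the extra buffers are placeholders you would implement against your own FBOs:

    #include <eq/eq.h>

    // Placeholders for your application code:
    void readbackNormals( const eq::PixelViewport& pvp );
    void readbackObjectIDs( const eq::PixelViewport& pvp );
    void assembleExtraBuffers( const eq::Frames& frames );

    class Channel : public eq::Channel
    {
    public:
        Channel( eq::Window* parent ) : eq::Channel( parent ) {}

    protected:
        // Let Equalizer read back the standard color/depth buffers, then
        // fetch the extra representations from your own FBOs.
        virtual void frameReadback( const eq::uint128_t& frameID )
        {
            eq::Channel::frameReadback( frameID );

            const eq::PixelViewport& pvp = getPixelViewport();
            readbackNormals( pvp );
            readbackObjectIDs( pvp );
        }

        // Composite the standard buffers with the stock compositor, then
        // place the extra representations at the same destination offsets.
        virtual void frameAssemble( const eq::uint128_t& frameID )
        {
            eq::Compositor::assembleFrames( getInputFrames(), this, 0 );
            assembleExtraBuffers( getInputFrames( ));
        }
    };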
> We are using multiple GB of model files and texture data. Can all Equalizer nodes share that data, so that it would not have to be loaded multiple times (if using one PC)?
On one PC you would configure only one node (process) with n pipes (threads), one for each GPU. The pipe threads naturally have access to shared, process-local data.
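For illustration, a minimal sketch of that pattern; the Model type and loader are placeholders for your application code:

    #include <eq/eq.h>

    struct Model;                         // your scene/texture data
    Model* loadModel( const char* file ); // placeholder loader

    class Node : public eq::Node
    {
    public:
        Node( eq::Config* parent ) : eq::Node( parent ), _model( 0 ) {}

        // Read-only access for all pipe threads of this process:
        const Model* getModel() const { return _model; }

    protected:
        virtual bool configInit( const eq::uint128_t& initID )
        {
            if( !eq::Node::configInit( initID ))
                return false;
            _model = loadModel( "scene.dat" ); // loaded once per process
            return _model != 0;
        }

    private:
        Model* _model;
    };

    // In a pipe or channel, every thread sees the same instance:
    //   const Model* model = static_cast< Node* >( getNode( ))->getModel();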
HTH,
Stefan.