[osg-users] Multiple cameras with setRenderOrder(PRE_RENDER, i)

Ross Bohner

Nov 20, 2009, 11:12:52 PM
to osg-...@lists.openscenegraph.org
Hi,

I am having difficulty setting the order of multiple pre-render cameras for RTT. Specifically, I wish to set up:
1. a camera that renders a depth buffer to a texture, which is sent as a uniform to
2. another camera that performs further calculations into a texture, which is then used on the main scene root.

I set the first camera as a child of the second camera and set both render orders to PRE_RENDER. However, this structure does not correctly deliver the first camera's output texture to the second camera's shader. I have verified the shaders and the first camera's output.

How is the render sequence handled? I have read in other posts that the render order is based on the camera hierarchy. Did I misinterpret this, and should I not have added the first camera as a child of the second camera?

Also, how does a camera's RenderOrderNum affect the order of processing? If the cameras are siblings, my first interpretation was that the order is PRE_RENDER 0, PRE_RENDER 1, ..., PRE_RENDER n, NESTED_RENDER 0, NESTED_RENDER 1, ..., NESTED_RENDER n, POST_RENDER 0, POST_RENDER 1, ..., POST_RENDER n. However, when I set the render orders for the sequence above, the first camera's output texture was still not available for processing by the second camera.

Any help would be greatly appreciated

Thank you!

Cheers,
Ross

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=20043#20043

_______________________________________________
osg-users mailing list
osg-...@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

Paul Martz

Nov 21, 2009, 11:23:02 AM
to osg-...@lists.openscenegraph.org
Hi Ross --

Nesting cameras alone doesn't cause the output of the first Camera to be
"sent" to the second Camera. The second Camera will need to bind the
texture that the first Camera rendered into. Is it possible that you
simply failed to do this?

For a really simple example of RTT, see the attached example. It uses
the Viewer Camera to render the scene to an FBO with a texture attached,
then uses a post render Camera to draw a full-window quad with the
texture applied. Take a look at how it wires up the texture between the
two Cameras -- as an attachment to the first Camera, and as a Texture in
the StateSet of the second Camera.

For information on how Camera rendering is handled, take a look at
CullVisitor::apply(Camera). During cull, a Camera node creates a
RenderStage in the render graph. Each RenderStage has a list of pre
children and post children. The Camera's render order determines whether
the new RenderStage is added to the pre or post list of the current
RenderStage. During draw, OSG traverses the graph of RenderStages by
recursively processing first the pre list, then the RenderStage, then
the post list.
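That traversal order can be illustrated with a tiny stand-alone mock; the Stage struct below is a hypothetical stand-in for osgUtil::RenderStage, not the real class:

```cpp
#include <string>
#include <vector>

// Toy stand-in for a RenderStage: pre-render children are drawn first,
// then the stage itself, then post-render children, recursively.
struct Stage
{
    std::string name;
    std::vector<Stage*> preList;
    std::vector<Stage*> postList;
};

void draw( const Stage& s, std::vector<std::string>& out )
{
    for ( const Stage* pre : s.preList )   draw( *pre, out );  // pre list first
    out.push_back( s.name );                                   // then the stage
    for ( const Stage* post : s.postList ) draw( *post, out ); // post list last
}
```

With a pre-render stage and a post-render stage hanging off the main stage, draw() visits them in the order pre, main, post.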

Paul Martz
Skew Matrix Software LLC
http://www.skew-matrix.com
+1 303 859 9466

rtt.cpp

Robert Osfield

Nov 21, 2009, 11:24:05 AM
to osg-...@lists.openscenegraph.org
Hi Ross,

Nested RTT Cameras that use PRE_RENDER will always be rendered prior
to the enclosing Cameras.

W.r.t. RenderOrder, cameras are always rendered from negative to positive
values within the same level of nesting.

In your case, it sounds like you should avoid putting the pre-render
camera above the one you are trying to render. Instead, either keep them
as siblings and use the RenderOrder to draw the pre-render camera
first, or nest the pre-render camera inside the main camera.

Robert.

Ross Bohner

Nov 21, 2009, 7:03:01 PM
to osg-...@lists.openscenegraph.org
Hi,

Thank you for the code snippet. Looking it over, I see two differences from my implementation that have been giving me a headache. These are not necessarily related to camera render orders, so it might be more appropriate to raise them as separate threads.

The first difference between your code and mine is that you do not need to allocate an image for the texture in order to place the texture within a StateSet:


Code:

//... initializing texture, attached to camera's color buffer, etc.

// the lines in question:
osg::Image* image = new osg::Image;
image->allocateImage(width, height, 1, GL_RGBA, GL_FLOAT);
_texture->setImage(0, image);

//... initializing geometry and stateset
stateset->setTextureAttributeAndModes(0, _texture, osg::StateAttribute::ON);


In my experience, this is required, or else a seg fault occurs when the draw traversal references "_texture". I am confused about how you are able to pass the texture without allocating an image inside it. I have included a code snippet below to clarify what I see happening.

The second difference is that your code uses non-power-of-2 dimensions for the texture rendered by your camera. Whenever I have tried this, I have run into the error
"Warning: detected OpenGL error 'invalid value' after RenderBin::draw(,)" during every draw call involving the geometry containing the texture.

Here is my code snippet with the problematic lines marked:

Code:

//Will need to set the texture size to the size of the viewport, it is not right now
_texture = new osg::Texture2D();
{
_texture->setTextureSize(width, height);
_texture->setInternalFormat(GL_RGBA);
_texture->setFilter(osg::Texture2D::MIN_FILTER,osg::Texture2D::NEAREST);
_texture->setFilter(osg::Texture2D::MAG_FILTER,osg::Texture2D::NEAREST);
}

//set up the camera
{
// set up the render to camera attributes.
setClearMask( GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
setClearColor(osg::Vec4(0.0f,0.0f,0.0f,1.0f));

// set viewport
setViewport(0,0, width, height);
setRenderOrder(osg::Camera::PRE_RENDER);
setRenderTargetImplementation(osg::Camera::FRAME_BUFFER, osg::Camera::FRAME_BUFFER);

// attach the texture and use it as the color buffer. this will render the depth buffer to the texture.
//this should not be needed
//############################## Problem 1 ##########################
osg::Image* image = new osg::Image;
image->allocateImage(width, height, 1, GL_RGBA, GL_FLOAT);
_texture->setImage(0, image);
//############################## Problem 1 ##########################

//render the depth buffer to the texture
attach(osg::Camera::COLOR_BUFFER0, _texture, 0, 0, false, 8, 8);
}

osg::Geometry* polyGeom = new osg::Geometry();
polyGeom->setDataVariance( osg::Object::DYNAMIC );
polyGeom->setSupportsDisplayList(false);

float x0 = -5;
float y0 = 0;
float w = 10;
float h = 10;

osg::Vec3Array* verts = new osg::Vec3Array;
verts->push_back(osg::Vec3(x0, 0, y0));
verts->push_back(osg::Vec3(x0 + w, 0, y0));
verts->push_back(osg::Vec3(x0 + w, 0, y0 + h));
verts->push_back(osg::Vec3(x0, 0, y0 + h));
polyGeom->setVertexArray(verts);
polyGeom->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::QUADS,0,verts->size()));

osg::Vec2Array* texcoords = new osg::Vec2Array;
texcoords->push_back(osg::Vec2(0, 0));
texcoords->push_back(osg::Vec2(1, 0));
texcoords->push_back(osg::Vec2(1, 1));
texcoords->push_back(osg::Vec2(0, 1));
polyGeom->setTexCoordArray(0,texcoords);

osg::Vec4Array* colors = new osg::Vec4Array;
colors->push_back(osg::Vec4(1.0f,1.0f,1.0f,1.0f));
polyGeom->setColorArray(colors);
polyGeom->setColorBinding(osg::Geometry::BIND_OVERALL);

osg::StateSet* stateset = new osg::StateSet;

//########################### problem 2 ############################
//without allocating an image to the texture this line will cause a seg fault
stateset->setTextureAttributeAndModes(0, _texture, osg::StateAttribute::ON);
//###############################################################
stateset->setMode(GL_LIGHTING,osg::StateAttribute::OFF);
polyGeom->setStateSet(stateset);

osg::Geode* geode = new osg::Geode();
geode->addDrawable(polyGeom);
return geode;

Thank you for taking time to go over the code and looking at this post

Cheers,
Ross

------------------
Read this topic online here:

http://forum.openscenegraph.org/viewtopic.php?p=20071#20071

Paul Martz

Nov 22, 2009, 4:36:38 PM
to osg-...@lists.openscenegraph.org
Ross Bohner wrote:
> Hi,
>
> Thank you for the code snippet. Looking it over, I see two differences from my implementation that have been giving me a headache. These are not necessarily related to camera render orders, so it might be more appropriate to raise them as separate threads.
>
> The first difference between your code and mine is that you do not need to allocate an image for the texture in order to place the texture within a StateSet:

Consider the following OpenGL call:
glTexImage2D(..., width, height, ..., NULL );
Note I'm passing in a width and height but no data. According to the
OpenGL spec, this allocates an undefined texture image with dimensions
width x height.

This is essentially what my posted example is causing to happen when I
create an osg::Texture2D with width and height, but no osg::Image data.
I don't care that the texture is undefined, because my app initializes
it in the first render pass.

> In my experience, this is required, or else a seg fault occurs when the draw traversal references "_texture". I am confused about how you are able to pass the texture without allocating an image inside it. I have included a code snippet below to clarify what I see happening.

Try my code; if it segfaults, let me know. :-)

> The second difference is that your code uses non-power-of-2 dimensions for the texture rendered by your camera. Whenever I have tried this, I have run into the error
> "Warning: detected OpenGL error 'invalid value' after RenderBin::draw(,)" during every draw call involving the geometry containing the texture.

Wow. GLView shows that even the old GeForce 6xxx series supports the
NPOT extension. Do you have really old hardware, or perhaps you haven't
updated your driver recently?

Your code is incomplete so it's impossible to tell what you're doing.
You have a pre-render camera with a texture attached; you also have a
Geode to render a textured primitive. But I don't see how your code adds
them to a scene graph. Nor do I see anything added to your pre-render
camera, so as far as I can see there is nothing for it to render.

Have you tried compiling and running the code I sent you? How about the
osgprerender example? (osgprerender is a little complex, which is why I
wrote the more concise rtt.cpp example that I posted.)
-Paul

Ross Bohner

Nov 25, 2009, 2:49:22 PM
to osg-...@lists.openscenegraph.org
Hi,

Paul,

I tried your code, then prerender.cpp, then parts of shadowMap.cpp of osgParticle. All failed (seg fault when images were not attached to textures, and render failures when textures did not have power-of-two dimensions), so there had to be something wrong with the framework I was executing the code under.

Some investigation found:

Code:

for( unsigned int i = 0; i < texture.getNumImages(); ++i )
{
    texture.getImage(i)->ensureValidSizeForTexturing( 134217728 );
}


which was forcing power-of-two dimensions on textures. Interestingly, though, it was also causing seg faults for textures which did not include images. I will be looking into this some more.

Thank you for your help.
...

Thank you!

Cheers,
Ross

------------------
Read this topic online here:

http://forum.openscenegraph.org/viewtopic.php?p=20385#20385
