[osg-users] Passing array of data to vertex shader - uniform array?


Jean-Sébastien Guay

unread,
Feb 17, 2011, 2:52:24 PM2/17/11
to OpenSceneGraph Users
Hi all,

I'm trying to draw some instanced geometry, passing a transform per
instance. I've set up a uniform array for each node I'm instancing,
which has a fixed number of values (I'm using 128 now) and that works well.

The problem is that this way of passing data seems limited to a small
number of matrices. As I said I'm using a size of 128, so 128*16 floats,
which doesn't seem like much (8192 bytes of memory). I want to increase
that (my initial goal was about 1024 instances). But if I increase this
just to 256, I get

Warning: detected OpenGL error 'invalid operation' after RenderBin::draw(,)

repeatedly on the console, and my instances don't display anything
(presumably the matrices the shader is getting are all zeros).

This is to be used to display small rocks as particles, so I really need
more than 128 of each rock type. One solution might be to increase the
number of rock types (so the number of drawables that will be
instanced), but I would prefer another way.

One way I can see to increase the number of instances is to pass less
data per instance, for example a vec3 translation, a float uniform scale
and a vec4 quaternion, which gives 8 floats instead of 16 per instance.
Would this work, i.e. am I right in my assumption that the problem is
the amount of data and not the number of elements in the array?

Another solution might be a texture. In a float texture, I could pass
(if I use the same members above, so 8 floats per instance) over 2
million instances in a 4096x4096 texture. So to get to my initial goal
of 1024 instances, I would need a 128x64 float texture, which seems
manageable, and I could potentially go much higher if I wanted to.

I've seen an example (osguniformbuffer) that uses uniform buffer arrays.
I'm not sure whether this would apply to what I'm doing, but the usage
seems more complicated than a uniform mat4 array, so as long as the
performance difference isn't too large I'd prefer to avoid them.

Are there other ways to pass lots of data in an array to a vertex shader
that I haven't thought about? I've thought of the above options, so what
I'm asking is what other people have been using and what has worked for
them. Right now the float texture seems like the best option (especially
since the hardware we target seems fast at vertex texture fetch, which
we use for display of height fields).

Thanks in advance,

J-S
--
______________________________________________________
Jean-Sebastien Guay jean-seba...@cm-labs.com
http://www.cm-labs.com/
http://whitestar02.webhop.org/
_______________________________________________
osg-users mailing list
osg-...@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org

Sergey Polischuk

unread,
Feb 17, 2011, 5:28:47 PM2/17/11
to OpenSceneGraph Users
Hi, J-S
The maximum size of a uniform array is implementation dependent, generally around 4k-16k(+) floats. I have personally used plain 1D/2D textures to store data for instancing. Other options, as you mentioned, are uniform buffers, textures, and texture buffer objects. The maximum size of a uniform buffer is quite small, about the same 16k(+) floats as uniform arrays; the difference is only in usage (you don't need to transfer uniform values to the GPU, you just switch buffers). Uniform buffers work faster than textures, especially with a consecutive access pattern (as you will have with instancing), so if your data fits in one, I think it's the best option. If it just doesn't fit, use plain textures for static data and texture buffers for dynamic data.

Cheers,
Sergey.

17.02.2011, 22:52, "Jean-Sébastien Guay" <jean-seba...@cm-labs.com>:

Sergey Polischuk

unread,
Feb 17, 2011, 5:42:43 PM2/17/11
to OpenSceneGraph Users
Hi again,

Actually, I got the sizes wrong: uniform arrays generally top out in the range of 512-4k floats, and those limits count toward the total size of all uniforms used in the program, not each one individually. And uniform buffers should allow at least 16k floats, where supported.

Cheers,
Sergey.

18.02.2011, 01:28, "Sergey Polischuk" <pol...@yandex.ru>:

Jean-Sébastien Guay

unread,
Feb 17, 2011, 10:57:50 PM2/17/11
to OpenSceneGraph Users
Hi Sergey,

Thanks for your guidance. The data I need might fit in 16k floats per
drawable. I need a transform and a color, but the transform will always
have a uniform scale, so that's:

- 3 floats for translation
- 4 floats for rotation quaternion
- 1 float for uniform scale
- 4 floats for color

= 12 floats.

I could probably easily pack the color into one 32-bit integer (8 bits
each for R, G, B, A) and then extract the colors into a vec4 in the
shader, so that would save an extra 3 floats.

So without doing anything more esoteric (like packing multiple floats
into one), my data would fit in 9 floats per instance. That would be
about 1777 instances' worth of data, if I have access to 16k floats.

When you mentioned using texture buffers for dynamic data, what does
that mean? I would have just used a regular float texture which I fill
with the float data each frame. What are texture buffers and how do they
differ from this? Is there an OSG example that shows how to use them?

Thanks again,

Sergey Polischuk

unread,
Feb 18, 2011, 5:01:38 AM2/18/11
to OpenSceneGraph Users
Hi J-S.

There is one more point about available data size when using uniform arrays or uniform buffers:
you should pack your data into vec4s where possible, because uniforms count toward the maximum size in slots, with each slot taking the storage of a vec4. If you declare your uniform as "uniform float a[size];" it can take "size" slots, wasting 3*size floats of memory; the same goes for every type with fewer than 4 components. 4x4 matrices are fine, and with 3x3 matrices you waste 3 floats per matrix. Uniform blocks have the same issue to some extent, since data inside a block is aligned and padded.

About texture buffers:
If you generate the data for your instances on the CPU side, there is actually no point in using texture buffers. They are pretty much the same as 1D textures, except that they have no filtering, they can hold more data, and you access them in shaders with an integer index (via texelFetch). They are useful when you generate data on the GPU side with transform feedback and need more storage than uniform buffers provide. You can still use plain textures in the latter case, writing to the texture with fragment shaders instead of using transform feedback, though that is a bit tricky.

Cheers,
Sergey.

18.02.2011, 06:57, "Jean-Sébastien Guay" <jean-seba...@cm-labs.com>:

Glenn Waldron

unread,
Feb 18, 2011, 9:00:22 AM2/18/11
to OpenSceneGraph Users
J-S,

Another uniform array snafu I ran into: different drivers may report the uniform location for arrays differently. Specifically, the ATI Windows driver reports the uniform array location as "array[0]", while NVIDIA reports it as "array". Both are valid according to the spec, but OSG, I believe, only supports the latter (last time I checked). I had to introduce a helper class to account for this.

This may not be relevant to your problem, but it's a good thing to be aware of when using arrays...
 
Glenn Waldron : Pelican Mapping : +1.703.652.4791

Jean-Sébastien Guay

unread,
Feb 18, 2011, 9:26:17 AM2/18/11
to OpenSceneGraph Users
Hi Sergey,

> There is one more point about available data size when using uniform arrays or uniform buffers: you should pack your data into vec4s where possible [...]
>
> About texture buffers: if you generate the data for your instances on the CPU side, there is no point in using texture buffers [...]

Thanks for all the info, it's very useful. I'll probably use a texture
to pass the data.

Jean-Sébastien Guay

unread,
Feb 18, 2011, 9:29:19 AM2/18/11
to OpenSceneGraph Users
Hi Glenn,

> This may not be relevant to your problem but it's a good thing to be
> aware of anyway when using arrays..

Definitely, thanks for the info.
