Best path for OpenSubdiv integration


Colin Doncaster

Aug 29, 2013, 11:56:23 AM
to opens...@googlegroups.com
Hi there - 

Thank you Pixar for sharing OpenSubdiv!

We're currently evaluating how we can use OpenSubdiv in a toolset that already uses our own mesh pipeline, which is based on triangles for various reasons.  What we've come up with is:

a) Build an OpenSubdiv representation of the mesh when needed and use it to evaluate points on the limit surface.  This is less refactoring on our end, but it also leads us down the path of trying to come up with the best way of saying "I have this point on this triangle and need to derive where it sits on the limit surface".  Having spent a few days with the OpenSubdiv code and speaking with a few folks familiar with it, this isn't trivial, as retaining any sort of relationship between the input triangle mesh and the resulting subdivision surface is hard.

b) Completely refactor our geometry pipeline and replace our TriMesh implementation with OpenSubdiv.  This is a much bigger task (though it feels like the correct one), and there are a few unknowns.
     - we have to pick random points on the geometry.  We do this now by picking a random triangle and random barycentric coordinates; with OpenSubdiv, I assume this would mean picking a random patch at some subdivided level and then picking a random s & t?
     - what would be the "correct" way of deriving the surface area of a patch?
     - there are some cases where we might actually want to match the "flat" triangle surface vs. a smooth subdivided surface, and we'd like to do this with the same code path.  Can we force all edges in the input mesh to be hard, effectively resulting in a surface that matches a flat polygonal model?

Any thoughts on either of these paths would be appreciated.

Lastly, is there any support for closestPoint or ray-hit testing anywhere in the API?  Or is it left as an exercise for the user?

All the best, 
Colin

manuelk

Aug 30, 2013, 1:24:16 PM
to opens...@googlegroups.com
Hi Colin - long time no see!

> a) Build an OpenSubdiv representation of the mesh when needed and use it to evaluate points on the limit surface.  This is less refactoring on our end, but it also leads us down the path of trying to come up with the best way of saying "I have this point on this triangle and need to derive where it sits on the limit surface".  Having spent a few days with the OpenSubdiv code and speaking with a few folks familiar with it, this isn't trivial, as retaining any sort of relationship between the input triangle mesh and the resulting subdivision surface is hard.

When you say "point on a triangle", I assume you mean a point at an arbitrary location within the triangle, not one of the vertices, right?

Couple of thoughts about triangle-only meshes:
  • Catmark is fairly inefficient because each triangle has to generate at least 3 quads before any limit can be computed
  • Loop would be a much more natural choice, however we have not implemented the bi-cubic kernels yet (neither Draw nor Eval)
The kinds of limit-evaluation use cases you describe usually come in two main flavors:
  1. Homogeneous distributions (grass, hair):
    Our current "greedy" generation of patches that are evaluated in Eval is probably the best solution all around. With this solution, Eval maintains a quad-tree as a way of connecting a (faceid, u, v) location to the limit patches.


  2. Sparse distributions (ex. cloth belt against dress)
    Ideally you want a "lazy" generation of stencils, which is something that I am currently working on. By their very nature, stencils are tied to the original location of the surface that they were generated for.

 
> - we have to pick random points on the geometry.  We do this now by picking a random triangle and random barycentric coordinates; with OpenSubdiv, I assume this would mean picking a random patch at some subdivided level and then picking a random s & t?

Converting the barycentric coordinates to and from ptex coordinates shouldn't be too difficult, and that should work - right?

> - what would be the "correct" way of deriving the surface area of a patch?

You can probably get a pretty good approximation by summing the areas of tessellated tris / quads... But since each face can be broken up into a collection of bi-cubic patches, I am guessing that some kind of contour integral should exist for each patch (although I would probably isolate Gregory patches to a high level and treat them as bi-linear...)
 
> - there are some cases where we might actually want to match the "flat" triangle surface vs. a smooth subdivided surface, and we'd like to do this with the same code path.  Can we force all edges in the input mesh to be hard, effectively resulting in a surface that matches a flat polygonal model?

Sticking infinitely sharp creases on each edge would be very inefficient... but Hbr & Far both support a "bilinear" subdivision scheme, which should give you the flat polygonal surface through the same code path.
 
> Lastly, is there any support for closestPoint or ray-hit testing anywhere in the API?  Or is it left as an exercise for the user?
 
Projection and intersection are not implemented yet. We have some internal code, but it's not quite ready for prime time: the algorithms only produce approximations, tend to be slow, and don't thread well. IMHO both problems are still research topics...

Doug Epps

Sep 3, 2013, 4:27:04 PM
to manuelk, opens...@googlegroups.com
Hi Colin,

To recap (as best I understand it):

Currently you tessellate (or subdivide and push to the limit?) your subd into triangles.  You then use those triangles to generate random points on the surface.  At each point you get P, N, etc. to do hair growth or whatever.

The minimal change to your code would be to use the Eval API to do your tessellation, then proceed as you do now.  You could change your mesh to bi-linear to get the faceted model when needed.

Does that make sense ?

I.e. you don't want to hand your triangles to Hbr, as that will get you a surface that's way different from the limit surface you're seeing (or will see) in Maya from the original control mesh.



--
You received this message because you are subscribed to the Google Groups "OpenSubdiv Forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opensubdiv+...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Colin Doncaster

Sep 9, 2013, 9:27:06 AM
to Doug Epps, manuelk, opens...@googlegroups.com
Hello Manuel and Doug, 

Thank you for the insights - we’ll head down this route with the plan of integrating OpenSubdiv better in the future.  I’m sure we’ll have a few more questions as we go.

All the best, 
Colin 


Colin Doncaster

Sep 10, 2013, 5:15:07 PM
to Doug Epps, manuelk, opens...@googlegroups.com

Hello all - 

So, without sounding too dim: how do I get the resulting faces and vertices (and normals etc.) back from the FarMesh/FarMeshFactory once it’s been refined?

The examples all seem to be a one-way street from having the geometry to evaluation, tessellation, and drawing in whatever context is specified.

Thank you!


Rob Pieke

Sep 10, 2013, 6:33:34 PM
to Colin Doncaster, Doug Epps, manuelk, opens...@googlegroups.com
> So, without sounding too dim: how do I get the resulting faces and vertices (and normals etc.) back from the FarMesh/FarMeshFactory once it's been refined?
> The examples all seem to be a one-way street from having the geometry to evaluation, tessellation, and drawing in whatever context is specified.

Yes, this was definitely something we noticed too!

I'll send some code tomorrow. Once you figure out which APIs to use, it's not too outrageously complicated :)

Cheers!

- Rob

Doug Epps

Sep 10, 2013, 5:27:37 PM
to Colin Doncaster, manuelk, opens...@googlegroups.com

Are you looking for the points at vertex locations, or arbitrary spots?

If it's arbitrary spots, I've got a tar to send you that's basically examples/limitEval but without OpenGL calls.

If it's the vertices pushed to the limit, it's something in examples/simpleCpu/ but it's not super-clear what's going on because of the OpenGL.

If you need I can try to grok it (it's been a little while since I've looked in there).

-doug

Colin Doncaster

Sep 10, 2013, 5:37:54 PM
to Doug Epps, manuelk, opens...@googlegroups.com
Thanks Doug, 

I guess I'm looking for a face count, vertices, and per-face vertex indices at a specific level, as you'd have with, say, Maya.

Or, everything I'd need to render/draw the surface if it was an unsupported target.

Cheers


Doug Epps

Sep 11, 2013, 12:42:54 AM
to Rob Pieke, Colin Doncaster, manuelk, opens...@googlegroups.com
Hmm.

Earlier today I had confidently told Colin, "I know I've done this, I'll dig up the code tonight".  Apparently I did this long enough ago that the code I have doesn't work anymore (so presumably pre-1.0 release?).  It used farMesh->Subdivide(), which apparently doesn't exist anymore.

Rob, I'm hoping you have something cool for Colin.  Otherwise I'll need to spelunk through the examples and make a simple one (I've got one for the Eval API that I need to make a pull request for).

Rob, even if all you have are some code-fragments, I'd love to grab them and turn them into a single-file example.



-doug

Rob Pieke

Sep 11, 2013, 4:15:20 AM
to Doug Epps, Colin Doncaster, manuelk, opens...@googlegroups.com
Right ... so it's good Colin made this request since, after much grumbling of my own about the lack of such an example, I promised Manuel at SIGGRAPH that I'd put one together (and obviously never did).

A lot of this will be hand-wavy (with probably a few typos), but just ask for more details if you need them:

Our use-case is, I expect, almost identical to Colin's. Take a mesh with some data (vertex positions, face vertex UVs, face UDIMs, etc), subdivide it and all the data with it, and extract a new mesh with new data. I'll preface this whole email by saying the Pixar guys were very patient with my rather opinionated (blunt) views about the complexities of this API (especially from a self-study point of view), but the results have been awesome. Our internal subdivision code was _much_ slower, and didn't have any support for creases.

Okay, here we go ...

Create an OsdHbrMesh (OpenSubdiv::HbrMesh<OpenSubdiv::OsdVertex>). At a minimum, pass in whether it's Catmull-Clark or Loop. If you're dealing with face-varying data, this is also the opportunity to pass in info about the data.

Sending in the topology is very well detailed in all the examples, so I'll skip all that. This is also when you'd pass in the face-varying data itself, as well as face/uniform data.

Eventually:

hbrMesh->Finish();
OsdFarMeshFactory fact( hbrMesh, myDepth );
OsdFarMesh *farMesh( fact.Create( true ) );

Okay ... now to grab the topology back.

// All subdivision levels share one packed vertex buffer; count the
// vertices below the finest level so indices can be rebased.
int skipVertCount = farMesh->GetSubdivisionTables()->GetNumVerticesTotal( myDepth - 1 );

const OpenSubdiv::FarPatchTables *pTables = farMesh->GetPatchTables();
const unsigned int *firstVertIdx = pTables->GetFaceVertices( 1 );

int numFaces = pTables->GetNumFaces( 1 );
int vertsPerFace = LOOP ? 3 : 4;   // LOOP: however you track which scheme you're using

std::vector< int > faceVerts( numFaces * vertsPerFace );
for ( int i = 0; i < numFaces * vertsPerFace; ++i )
{
    // Patch-table indices are global across all levels; rebase to the finest level.
    faceVerts[ i ] = firstVertIdx[ i ] - skipVertCount;
}

And you're done! numFaces, vertsPerFace and faceVerts should be enough to build a topology from.

Getting the uniform data back involves iterating over the faces from the HbrMesh, walking up the parent hierarchy to the top, and copying the data back.

Getting the facevarying data back involves pTables->GetFVarDataTable().

Finally vertex data:

OsdCpuComputeController computeCtl;
OsdCpuComputeContext *computeCtx = OsdCpuComputeContext::Create( farMesh );

for each vertex attribute
{
  int dataSize = VECTOR ? 3 : 1;
  OsdCpuVertexBuffer *buf = OsdCpuVertexBuffer::Create( dataSize, farMesh->GetNumVertices() );
  buf->UpdateData( &data[0], 0, data.size() );
  computeCtl.Refine( computeCtx, farMesh->GetKernelBatches(), buf );
  float *refinedData = buf->BindCpuBuffer();
  // Copy out only the finest-level vertices, skipping the coarser levels.
  int numVerts = buf->GetNumVertices() - skipVertCount;
  std::copy( refinedData + dataSize * skipVertCount, refinedData + dataSize * buf->GetNumVertices(), target );
}

Again, lots of glossing over stuff (especially facevarying & uniform data) ... don't be shy if you want more details. And definitely don't be shy if you spot something really _really_ bad with this approach :)

- Rob

Colin Doncaster

Sep 11, 2013, 10:50:35 AM
to Rob Pieke, Doug Epps, manuelk, opens...@googlegroups.com
Hi Rob, thank you for the detailed explanation. 

So if I understand correctly, the subdivision tables are a hierarchy of the different subdivision levels?  Requesting the total number of vertices for myDepth - 1 is basically providing the starting index into the global vertex list for the vertices at myDepth?

Why is pTables->GetFaceVertices( 1 ) using an index of 1 (and not, say, 0)?  And I guess the same can be asked of int numFaces = pTables->GetNumFaces( 1 )?

You also seem to be refining the vertex data individually; in the simpleCpu example both vertex position and normals are passed in the same data structure.  Could this mean multiple vertex attributes can be refined at the same time, and if so, is there a benefit to that?  Or is there a reason why you’re not defining multiple vertex buffers and passing them all to Refine at the same time? (i.e. line 524 of main.cpp in the limitEval example)

I realize I might be splitting hairs, but just curious as to best practices.  

OpenSubdiv is sweet - we also compared it to our internal subdivision code and there was no contest, especially with the various compute contexts.  As with exposing any new API publicly, we’ll all need to use it a little differently, so hopefully with time it’ll be refined.  I’m looking forward to using it more outside the context of general VFXy needs (i.e. on Android etc.)

Thank you again to all for the help!

Cheers, 
Colin

Colin Doncaster

Sep 11, 2013, 10:58:19 AM
to Rob Pieke, Doug Epps, manuelk, opens...@googlegroups.com
Sorry - just implementing this and realized that pTables->GetFaceVertices( 1 ) is referring to the level; should this not be pTables->GetFaceVertices( myDepth ) based on your example, or do we need to specifically use 1 in this case?

Cheers


Rob Pieke

Sep 11, 2013, 11:11:17 AM
to Colin Doncaster, Doug Epps, manuelk, opens...@googlegroups.com
> pTables->GetFaceVertices( 1 ) is referring to the level,
> should this not be pTables->GetFaceVertices( myDepth )?

As I understand it, the FarMesh only stores the coarsest (i.e., original) and finest levels of detail. So, yeah, when dealing with the FarMesh, use 1 ... when dealing with the HbrMesh, use myDepth-1.


> You also seem to be refining the vertex data individually

Yep.


> in the simpleCpu example both vertex position and normals
> are being passed in the same data structure.
> Could this potentially mean multiple vertex attributes
> can be refined at the same time,
> and if so is there a benefit to that?

Ummmmmmmm ... dunno :)

We definitely didn't consciously avoid this option.

- Rob

manuelk

Sep 11, 2013, 1:11:49 PM
to opens...@googlegroups.com, Colin Doncaster, Doug Epps, manuelk


On Wednesday, September 11, 2013 8:11:17 AM UTC-7, Rob Pieké wrote:
> pTables->GetFaceVertices( 1 ) is referring to the level,
> should this not be pTables->GetFaceVertices( myDepth )?

> As I understand it, the FarMesh only stores the coarsest (i.e., original) and finest levels of detail. So, yeah, when dealing with the FarMesh, use 1 ... when dealing with the HbrMesh, use myDepth-1.

This is perhaps a little muddled in the API, but in uniform mode you can request which levels the FarMeshFactory keeps in the FarPatchTables:

    /// @param firstLevel  First level of subdivision to use when building the
    ///                    FarMesh. (The default -1 only generates a single patch
    ///                    array for the highest level of subdivision.)
    ///                    Note : firstLevel is only applicable if adaptive is false
    ///

In practice, we find that we often need both the smooth and control cage representations, so we have been looking at exposing this ability since we generalized the patch tables earlier this spring. We are still re-working the imaging core of our animation system, so I am assuming that this feature will eventually bubble up as part of this task...



 
> in the simpleCpu example both vertex position and normals
> are being passed in the same data structure.
> Could this potentially mean multiple vertex attributes
> can be refined at the same time,
> and if so is there a benefit to that?

> Ummmmmmmm ... dunno :)

> We definitely didn't consciously avoid this option.
 
Absolutely: a fully implemented vertex class could dynamically manage any arbitrary amount of primvar data.

The RenderMan spec is extremely flexible with its implementation of primitive variables: in RIB, vertices often have many data streams attached to them, all with specific interpolation schemes (vertex / varying / face-varying). Representing this data efficiently at the implementation level is not easy... which is in good part why the vertex class is abstracted out with a templated API, so that client code can opt into the features it may need and not pay for what it doesn't. This also gives the client vertex class flexibility over whether the data is interleaved or serialized. You can also use the vertex class API as a call-back system to perform calculations (such as stencil value accumulation - code checked in very soon...)


manuelk

Sep 11, 2013, 1:16:33 PM
to opens...@googlegroups.com, Colin Doncaster, manuelk


On Tuesday, September 10, 2013 2:27:37 PM UTC-7, dougie wrote:

> If it's the vertices pushed to the limit, it's something in examples/simpleCpu/ but it's not super-clear what's going on because of the OpenGL.
 
I don't think simpleCpu does anything other than uniformly subdivide a mesh: the vertices would not be on the limit.

