ReconstructMe Textures


Christoph Heindl

Jul 10, 2012, 8:09:37 AM
to recons...@googlegroups.com
Hi folks,

we've now created two prototypes for putting color onto 3D models. The first approach colorizes vertices; the second projects photos as textures onto triangles. While the first approach is easy to implement, it scales badly (as far as memory consumption and subdivision are concerned). Therefore, we'd like to go for textures. Since a single mesh is then textured from multiple perspectives, we need to be able to assign multiple textures to it. Here are three possible ways to do that:

  - split mesh at texture boundaries (easy to implement; see the sketch after this list)
  - use multiple texture channels (easy to implement, but limited to graphics card channels)
  - find a 2D parametrization of the mesh and texture in that space (hard to implement)
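
For illustration, a minimal sketch of the first option in plain Python (a hypothetical two-triangle example; positions and texture assignments are invented): the vertices on the shared edge are duplicated so that every submesh owns its vertices outright.

    # Two triangles share the edge (1, 2) but are textured from different
    # photos, so the split copies the shared vertices into each submesh.
    positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
    faces = [(0, 1, 2), (1, 3, 2)]   # triangle A, triangle B
    face_texture = [0, 1]            # A is textured by photo 0, B by photo 1

    submeshes = {}                   # texture id -> (vertices, triangles, remap)
    for face, tex in zip(faces, face_texture):
        verts, tris, index_of = submeshes.setdefault(tex, ([], [], {}))
        tri = []
        for vi in face:
            if vi not in index_of:   # copy the vertex on first use per texture
                index_of[vi] = len(verts)
                verts.append(positions[vi])
            tri.append(index_of[vi])
        tris.append(tuple(tri))

    for tex, (verts, tris, _) in submeshes.items():
        print(f"texture {tex}: {len(verts)} vertices, {len(tris)} triangles")
    # Vertices 1 and 2 now exist twice, once per submesh, each copy free to
    # carry the UV coordinates of its own photo.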

We'd like to hear end users' opinions on this topic. How would you expect to receive a textured mesh? Is splitting the mesh into sub-parts OK, and if so, which formats can cope with that? How do other products handle this?

Best,
Christoph



jared....@gmail.com

Jul 10, 2012, 12:26:45 PM
to recons...@googlegroups.com
This is great news.

I don't know that an explicit splitting of the mesh data is required.

One simple approach would be to treat each perspective's texture as a separate material, and vertices can be assigned to the appropriate material instance; OBJ and 3DS certainly support such a specification, and I believe PLY does as well.
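
For concreteness, a hypothetical OBJ fragment along those lines (the material and file names are invented; scan.mtl would map photo0 and photo1 to the two captured images):

    mtllib scan.mtl

    v 0.0 0.0 0.0
    v 1.0 0.0 0.0
    v 0.0 1.0 0.0
    v 1.0 1.0 0.0
    vt 0.2 0.3
    vt 0.8 0.3
    vt 0.2 0.9
    vt 0.5 0.5
    vt 0.1 0.5
    vt 0.5 0.1

    # faces after each usemtl statement sample that perspective's photo
    usemtl photo0
    f 1/1 2/2 3/3
    usemtl photo1
    f 2/4 4/5 3/6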

Another alternative is to simply "concat" textures from each perspective into a larger, unified grid texture. UVs should be fairly easy to (re)compute as the grid is enlarged.
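
A sketch of that remapping, assuming (hypothetically) equal-sized photos packed row-major into the smallest square grid that holds them:

    import math

    def atlas_uv(tile_index, u, v, n_tiles):
        """Remap a per-photo UV into a row-major square grid atlas."""
        cols = math.ceil(math.sqrt(n_tiles))
        rows = math.ceil(n_tiles / cols)
        col, row = tile_index % cols, tile_index // cols
        return (col + u) / cols, (row + v) / rows

    # A UV of (0.5, 0.5) in photo 3 of a 4-photo atlas lands in grid cell (1, 1):
    print(atlas_uv(3, 0.5, 0.5, 4))   # -> (0.75, 0.75)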

Are you planning to incorporate any kind of texture/color blending for overlapping perspectives?


Christoph Heindl

Jul 11, 2012, 2:18:44 AM
to recons...@googlegroups.com


On Tuesday, July 10, 2012 at 6:26:45 PM UTC+2, jared....@gmail.com wrote:
 
One simple approach would be to treat each perspective's texture as a separate material, and vertices can be assigned to the appropriate material instance; OBJ and 3DS certainly support such a specification, and I believe PLY does as well.
Another alternative is to simply "concat" textures from each perspective into a larger, unified grid texture.  UVs should be fairly easy to (re)compute as the grid is enlarged.

Of course, we can always generate a texture atlas, but at one point we get stuck: consider a triangle A and an adjacent triangle B sharing an edge (i.e. two vertices). Also assume that triangle A is textured by image i0 and B is textured by image i1. Both images are concatenated into a larger texture.

Now the shared vertices would need two different UV coordinates to render correctly, one set of UV coordinates for each image, correct? If so, we can either split the mesh at the edge and duplicate the vertices, or use multiple texture channels.

At this point we are stuck.
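
To make the multi-channel alternative concrete, here is a hypothetical layout for the A/B example above (positions and UVs invented): the shared vertices are stored once but carry one UV per channel, and each face records which channel, i.e. which image, it samples.

    vertices = [
        # position      uv in image i0   uv in image i1
        ((0, 0, 0),     (0.1, 0.1),      (0.0, 0.0)),  # used only by A
        ((1, 0, 0),     (0.9, 0.1),      (0.2, 0.1)),  # shared by A and B
        ((0, 1, 0),     (0.1, 0.9),      (0.1, 0.8)),  # shared by A and B
        ((1, 1, 0),     (0.0, 0.0),      (0.9, 0.9)),  # used only by B
    ]
    faces = [
        (0, 1, 2, 0),   # triangle A samples channel 0 (image i0)
        (1, 3, 2, 1),   # triangle B samples channel 1 (image i1)
    ]
    for *tri, channel in faces:
        uvs = [vertices[v][1 + channel] for v in tri]
        print(f"triangle {tri} -> image i{channel}, uvs {uvs}")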

 
Are you planning to incorporate any kind of texture/color blending for overlapping perspectives?  

Yes, exposure compensation and blending. We consider this a completely separate step from the final texturing, which is why it isn't discussed here.

Best,
Christoph 

jared....@gmail.com

Jul 11, 2012, 11:28:46 AM
to recons...@googlegroups.com
So, I would consider rendering and exporting somewhat separately.

Maintaining a mesh representation with non-repeating vertices (and an arbitrary number of texture channels) has a number of benefits; it's both faster and less delicate to split this format into submeshes as a post-processing step (one texture per submesh) than it is to combine a given number of submeshes into a unified mesh with non-repeating vertices. This gives you more freedom at export to handle multiple output formats. It's also more convenient for other post-processing steps like mesh reconstruction and so forth.
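
A rough sketch of that export-time split over such a representation (the names and exact mesh layout are hypothetical: faces index shared positions and carry a texture id plus per-corner UVs):

    from collections import defaultdict

    def split_by_texture(positions, faces, face_tex, face_uvs):
        """Split a shared-vertex mesh into one submesh per texture at export."""
        subs = defaultdict(lambda: {"verts": [], "uvs": [], "tris": [], "remap": {}})
        for tri, tex, uvs in zip(faces, face_tex, face_uvs):
            sm = subs[tex]
            new_tri = []
            for vi, uv in zip(tri, uvs):
                key = (vi, uv)               # reuse only identical position+UV pairs
                if key not in sm["remap"]:
                    sm["remap"][key] = len(sm["verts"])
                    sm["verts"].append(positions[vi])
                    sm["uvs"].append(uv)
                new_tri.append(sm["remap"][key])
            sm["tris"].append(tuple(new_tri))
        return {t: (s["verts"], s["uvs"], s["tris"]) for t, s in subs.items()}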

If I recall correctly, OBJ supports but a single UV channel, so you would need to split the mesh at export regardless. However, Collada, FBX and several other formats can all handle multiple UV channels.

On the rendering side, I'm unaware of any modern GPU that supports fewer than 4 hardware UV channels. If you can limit a single vertex to that number, I can't see you running into any issues. At the very least, the driver will do the right thing in software when you use a greater number of channels. You will hit a limit in pathological reconstruction cases, so I suppose it's a bit of a balancing act.
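
A quick way to check that constraint on a reconstruction (hypothetical helper over the same face/texture arrays as above):

    from collections import defaultdict

    def max_textures_per_vertex(faces, face_tex):
        """Largest number of distinct textures any single vertex touches;
        if this stays <= 4, a 4-channel hardware layout is enough."""
        seen = defaultdict(set)
        for tri, tex in zip(faces, face_tex):
            for vi in tri:
                seen[vi].add(tex)
        return max(len(texs) for texs in seen.values())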

Regardless, you're certainly right about 2D parametrization; it's difficult enough to get right even with nice manifold, watertight meshes.

Cheers,
Jared


Christoph Heindl

Jul 12, 2012, 3:22:46 AM
to recons...@googlegroups.com


On Wednesday, July 11, 2012 at 5:28:46 PM UTC+2, jared....@gmail.com wrote:
So, I would consider rendering and exporting somewhat separately.    

Maintaining a mesh representation with non-repeating vertices (and an arbitrary number of texture channels) has a number of benefits; it's both faster and less delicate to split this format into submeshes as a post-processing step (one texture per submesh) than it is to combine a given number of submeshes into a unified mesh with non-repeating vertices. This gives you more freedom at export to handle multiple output formats.  It's also more convenient for other post-processing steps like mesh reconstruction and so forth.

I agree.
 

If I recall correctly, OBJ supports but a single UV channel, so you would need to split the mesh at export regardless.  However, Collada, FBX and several other formats can all handle multiple UV channels.  

Good to know.
 

On the rendering side, I'm unaware of any modern GPU that supports less than 4 hardware UV channels.  If you can limit a single vertex to such a number, I can't see you running into any issues.  At the very least, the driver will do the right thing in software when you use a greater number of channels.  You will hit a limit in pathological reconstruction cases, I suppose it's a bit of a balancing act.

Our experiments show that the drivers won't do the right thing in software: we tested with hardware that supports 4 channels and used 25 channels in software, which failed.
 

Regardless, you're certainly right about 2D parametrization; it's difficult enough to get right even with nice manifold, watertight meshes.

This is also our impression.

Thanks for your input.

Best,
Christoph 

Mark Schafer

Jul 13, 2012, 4:22:01 AM
to recons...@googlegroups.com
Please don't split the mesh and double up on verts at texture boundaries. Instead, add a UV set.
In OBJ format a face has V, VT, and VN pointers, so each face can reference the same position but a different UV if desired. There is no problem with this.
The problem comes when OBJ readers expect trivial files. IMHO the readers are often to blame (and the way OBJ files can be organized with relative or absolute offsets doesn't help).
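
A hypothetical fragment showing this (normals omitted for brevity): both faces reference positions 2 and 3, but point them at different vt entries, so the same position carries a different UV in each face.

    v 0.0 0.0 0.0
    v 1.0 0.0 0.0
    v 0.0 1.0 0.0
    v 1.0 1.0 0.0
    vt 0.2 0.3
    vt 0.8 0.3
    vt 0.2 0.9
    vt 0.5 0.5
    vt 0.1 0.5
    vt 0.5 0.1
    f 1/1 2/2 3/3
    f 2/4 4/5 3/6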

So: many textures with associated UV sets, non-overlapping, draped over the original vertices.

Choose sets of UVs to associate with a map. Choose which faces get a given map by picking the texture camera most aligned with each face's normal (a sketch follows). Allow some slack so that faces clump together along boundaries.
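
A sketch of that assignment (hypothetical; face normals and camera view directions are assumed to be unit vectors):

    def assign_faces_to_cameras(face_normals, camera_dirs):
        """Per face, pick the camera that sees it most head-on, i.e. whose
        viewing direction is most opposed to the face normal."""
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))
        return [min(range(len(camera_dirs)),
                    key=lambda c: dot(n, camera_dirs[c]))
                for n in face_normals]

    # Two faces, two cameras looking down -Z and -X respectively:
    print(assign_faces_to_cameras([(0, 0, 1), (1, 0, 0)],
                                  [(0, 0, -1), (-1, 0, 0)]))   # -> [0, 1]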
For multiple overlapping UV sets: if you internally keep all of the UV sets, one for each texture, then in the export tool you can let users define seams themselves to avoid unpleasant texture artifacts. That's an advanced UI which may not be required if you have many textures.
You can also compare the different candidate textures on a face to see how much they vary from each other.
If they are very different, then do the same comparison with the surrounding faces and choose the lowest-entropy texture.
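
One possible reading of that metric (a hypothetical sketch; candidate patches are given as flat lists of quantized pixel values):

    import math
    from collections import Counter

    def patch_entropy(pixels):
        """Shannon entropy of a pixel patch; low entropy = little variation."""
        total = len(pixels)
        return -sum(c / total * math.log2(c / total)
                    for c in Counter(pixels).values())

    candidates = {
        "photo0": [128, 128, 130, 129],   # nearly uniform patch
        "photo1": [0, 255, 40, 200],      # noisy, e.g. a grazing-angle view
    }
    print(min(candidates, key=lambda k: patch_entropy(candidates[k])))  # photo0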

You may have a realtime problem if your shaders try to use overlapping UV sets. Avoid that in realtime display: pick one (using either metric above, or at random).

The next level is to create a homogeneous mapping of the entire surface, amalgamating the textures into a single one and achieving a uniform mapping. But hey, first things first.