So, I would consider rendering and exporting somewhat separately.
Maintaining a mesh representation with non-repeating vertices (and an arbitrary number of texture channels) has several benefits: it's both faster and less error-prone to split this format into submeshes as a post-processing step (one texture per submesh) than it is to merge a set of submeshes back into a unified mesh with non-repeating vertices. This gives you more freedom at export to handle multiple output formats, and it's also more convenient for other post-processing steps such as mesh reconstruction.
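For what it's worth, here's a rough sketch of what I mean by that splitting step; the types (Vertex, Face, SubMesh) are made up for illustration, not from any particular library. Faces are bucketed by texture id, and a vertex is duplicated the first time it appears in a given bucket, keeping only that texture's UV set:

    #include <array>
    #include <cstdint>
    #include <map>
    #include <vector>

    struct Vertex {
        std::array<float, 3> pos;
        std::map<uint32_t, std::array<float, 2>> uv; // texture id -> UV set
    };

    struct Face {
        std::array<uint32_t, 3> v; // indices into the shared vertex list
        uint32_t texture_id;
    };

    struct FlatVertex { std::array<float, 3> pos; std::array<float, 2> uv; };

    struct SubMesh {
        std::vector<FlatVertex> vertices;
        std::vector<std::array<uint32_t, 3>> faces;
    };

    // One single-textured submesh per texture id; vertices shared across
    // textures are duplicated, each copy keeping only the relevant UV set.
    std::map<uint32_t, SubMesh> splitByTexture(const std::vector<Vertex> &verts,
                                               const std::vector<Face> &faces)
    {
        std::map<uint32_t, SubMesh> out;
        // Per-texture remap: original vertex index -> submesh vertex index.
        std::map<uint32_t, std::map<uint32_t, uint32_t>> remap;

        for (const Face &f : faces) {
            SubMesh &sub = out[f.texture_id];
            auto &m = remap[f.texture_id];
            std::array<uint32_t, 3> tri;
            for (int i = 0; i < 3; ++i) {
                auto it = m.find(f.v[i]);
                if (it == m.end()) {
                    // First use of this vertex in this submesh: copy it.
                    const Vertex &src = verts[f.v[i]];
                    it = m.emplace(f.v[i], (uint32_t)sub.vertices.size()).first;
                    sub.vertices.push_back({src.pos, src.uv.at(f.texture_id)});
                }
                tri[i] = it->second;
            }
            sub.faces.push_back(tri);
        }
        return out;
    }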
If I recall correctly, OBJ supports only a single UV channel, so you would need to split the mesh at export regardless. COLLADA, FBX and several other formats, however, can all handle multiple UV channels.
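To make the OBJ case concrete: each split submesh maps directly onto its own usemtl group, with a single vt index per face corner. A made-up two-triangle excerpt:

    mtllib scan.mtl
    v 0.0 0.0 0.0
    v 1.0 0.0 0.0
    v 0.0 1.0 0.0
    v 1.0 1.0 0.0
    vt 0.0 0.0
    vt 1.0 0.0
    vt 0.0 1.0
    vt 1.0 1.0
    usemtl texture0
    f 1/1 2/2 3/3
    usemtl texture1
    f 2/2 4/4 3/3

There's no way to attach a second set of vt coordinates to the same face corner, hence the split.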
On the rendering side, I'm unaware of any modern GPU that supports fewer than four hardware UV channels. If you can limit a single vertex to that number, I can't see you running into any issues; even if you exceed it, the driver will at least do the right thing in software. You could still hit the limit in pathological reconstruction cases, so I suppose it's a bit of a balancing act.
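If it helps, that pathological case is cheap to check up front. A sketch, reusing the hypothetical Face type from above, that counts how many distinct textures touch each vertex:

    #include <set>

    // True if no vertex is referenced by faces from more than maxChannels
    // distinct textures, i.e. the mesh fits the hardware UV channel budget.
    bool fitsInChannels(const std::vector<Face> &faces, size_t vertexCount,
                        size_t maxChannels = 4)
    {
        std::vector<std::set<uint32_t>> texturesAtVertex(vertexCount);
        for (const Face &f : faces)
            for (uint32_t idx : f.v)
                texturesAtVertex[idx].insert(f.texture_id);
        for (const auto &s : texturesAtVertex)
            if (s.size() > maxChannels)
                return false; // pathological: split locally or fall back
        return true;
    }

Any vertex that fails the check can be handled by the local vertex duplication you describe below, rather than abandoning the multi-channel path entirely.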
Regardless, you're certainly right about 2D parametrization; it's difficult enough to get right even with nice manifold, watertight meshes.
Cheers,
Jared
On Tuesday, July 10, 2012 11:18:44 PM UTC-7, Christoph Heindl wrote:
> On Tuesday, July 10, 2012 6:26:45 PM UTC+2, jared....@gmail.com wrote:
> > One simple approach would be to treat each perspective's texture as a separate material, and vertices can be assigned to the appropriate material instance; OBJ and 3DS certainly support such a specification, and I believe PLY does as well.
> >
> > Another alternative is to simply "concat" textures from each perspective into a larger, unified grid texture. UVs should be fairly easy to (re)compute as the grid is enlarged.
>
> Of course, we can always generate a texture atlas, but at some point we are stuck: consider a triangle A and an adjacent triangle B sharing an edge (i.e. two vertices). Also assume that triangle A is textured by image i0 and B is textured by image i1. Both images are concatenated into a larger texture.
>
> Now the shared vertices would need two different UV coordinates to render correctly, one set of UV coordinates for each image, correct? If so, we can either split the mesh at the edge and duplicate the vertices, or use multiple texture channels.
>
> At this point we are stuck.
>
> > Are you planning to incorporate any kind of texture/color blending for overlapping perspectives?
>
> Yes, exposure compensation and blending. We consider this a completely separate step from the final texturing, which is why it isn't discussed here.