Greetings,
my company would be happy to have a portable way for people to play 360 video. I investigated the V2 spec and have the following comments.
First, it would be nice to have out-of-the-box support for cameras with a field of view of N degrees, where N <= 360. Our cameras have a field of view of roughly 180 degrees. N could be specified with metadata.
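To illustrate, here is a rough Python sketch of the kind of metadata I mean; the `ProjectionMetadata` name and the `fov_degrees` field are my own placeholders, not anything from the V2 spec:

```python
from dataclasses import dataclass

@dataclass
class ProjectionMetadata:
    # Hypothetical field: field of view in degrees, N <= 360.
    fov_degrees: float = 360.0

# A player would derive the angular extent of the mesh from it:
meta = ProjectionMetadata(fov_degrees=180.0)
theta_max_degrees = meta.fov_degrees / 2.0  # half-angle from the optical axis
```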
Failing that, I thought about how I would implement the mesh generation. There are two ways to tackle this:
1) You tessellate a sphere, then map the sphere triangles onto the texture.
2) You split the texture into triangles and map them onto the sphere (sketched below).
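To make 2) concrete, here is a minimal Python sketch of the mapping I have in mind. I am assuming an equidistant fisheye model (distance from the image center proportional to the angle from the optical axis) with the round image centered in the texture; other lens models would need a different formula:

```python
import math

def fisheye_uv_to_sphere(u, v, fov_degrees=180.0):
    """Map a texture coordinate (u, v) in [0, 1]^2 to a point on the unit
    sphere. Assumes an equidistant fisheye whose image circle is centered
    at (0.5, 0.5) with radius 0.5 in texture space."""
    dx, dy = u - 0.5, v - 0.5
    r = math.hypot(dx, dy)  # 0 at the center, 0.5 on the image circle
    theta = (r / 0.5) * math.radians(fov_degrees) / 2.0  # angle from optical axis
    phi = math.atan2(dy, dx)  # azimuth around the optical axis
    # Optical axis along +z, so a 180-degree image covers the z >= 0 half.
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def make_grid_vertices(n=32, fov_degrees=180.0):
    """Tessellate the texture into an n x n grid and map every grid point
    onto the sphere; each cell then yields two triangles. Grid points
    outside the image circle (case a below) go through the same formula."""
    return [(fisheye_uv_to_sphere(i / n, j / n, fov_degrees), (i / n, j / n))
            for j in range(n + 1) for i in range(n + 1)]
```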
On my end, 2) looks simpler. However, there is some busywork needed to handle the triangles that do not map inside the round image within the texture. Such a triangle may be a) inside the texture but outside the round image, or b) entirely outside the texture.
Case a) can be handled trivially: treat those triangles exactly like the ones inside the round image, since the same mapping formula extends past the image circle. Case b) requires linking the vertices on the edge of the texture to the point (0, 0, -1), as sketched after this paragraph. The resulting mesh looks like a half-sphere morphing into a cone.
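Here is how I would close the mesh for case b), as a sketch reusing `fisheye_uv_to_sphere` from the snippet above: walk the grid points along the texture border and fan every border edge to the apex at (0, 0, -1). The apex simply reuses the uv of one border vertex, which is acceptable if those texels end up covered by a background color anyway:

```python
def perimeter_uvs(n=32):
    """Grid points along the texture border, in order, as a closed loop."""
    top    = [(i / n, 0.0) for i in range(n)]
    right  = [(1.0, j / n) for j in range(n)]
    bottom = [((n - i) / n, 1.0) for i in range(n)]
    left   = [(0.0, (n - j) / n) for j in range(n)]
    return top + right + bottom + left

def make_cone_triangles(n=32, fov_degrees=180.0):
    """Triangle fan linking the texture border to the back pole, giving
    the half-sphere-morphing-into-a-cone shape described above."""
    apex = (0.0, 0.0, -1.0)
    ring = perimeter_uvs(n)
    tris = []
    for k in range(len(ring)):
        (u0, v0) = ring[k]
        (u1, v1) = ring[(k + 1) % len(ring)]
        p0 = fisheye_uv_to_sphere(u0, v0, fov_degrees)
        p1 = fisheye_uv_to_sphere(u1, v1, fov_degrees)
        tris.append(((p0, (u0, v0)), (p1, (u1, v1)), (apex, (u0, v0))))
    return tris
```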
Cases a) and b) could be avoided if the spec hardcoded the OpenGL clear color (black) or let the user set it with metadata (an RGB background-color field). Then the triangles located entirely outside the round image would not need to be rendered at all, which also makes rendering more efficient.
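A rough sketch of the culling this enables, continuing from the snippets above. I am assuming the grid is fine enough that a per-vertex test is accurate; a coarse grid would also need an edge/circle intersection test, since a triangle can straddle the circle with all three vertices outside it:

```python
def triangle_is_visible(uvs):
    """Keep a triangle only if at least one of its three uv vertices lies
    inside the round image (radius 0.5 around the texture center)."""
    return any(math.hypot(u - 0.5, v - 0.5) <= 0.5 for (u, v) in uvs)

# The discarded area is then covered by the clear color, e.g.
# glClearColor(0.0, 0.0, 0.0, 1.0) for the hardcoded-black case.
```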
Hopefully my explanations are clear enough. Thanks for your time.