Stereo Attributes.


Sam Richards

Oct 5, 2013, 7:34:29 PM
to ves-tech-ca...@googlegroups.com
We are trying to explicitly define a number of stereo attributes that could exist in the exchange format (see: here). They are:
  • Stereo Rig Orientation, which could be one of: Over, Under, or Side-by-side
  • Stereo Thru Cam Eye, which could be one of: Left or Right
  • Stereo Convergence: the distance to the convergence point, measured in feet/inches, or Infinity if the optical axes are parallel to each other.
  • Stereo IA: the Stereo Interaxial distance (sometimes incorrectly labeled Stereo Interocular), the distance between the lenses.
In addition, we would record serial numbers for the right-eye lenses.
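As a rough illustration of how these might hang together (a sketch only; every name here is my own invention, not part of any agreed spec), the proposed attributes could be modelled like this:

```python
import math
from dataclasses import dataclass
from enum import Enum

class RigOrientation(Enum):
    # Hypothetical values mirroring the options listed above.
    OVER = "over"
    UNDER = "under"
    SIDE_BY_SIDE = "side-by-side"

class Eye(Enum):
    LEFT = "left"
    RIGHT = "right"

@dataclass
class StereoAttributes:
    rig_orientation: RigOrientation
    thru_cam_eye: Eye
    convergence_distance: float   # distance to convergence; math.inf if parallel
    interaxial_distance: float    # distance between the two lens axes
    right_eye_lens_serial: str = ""

# A parallel rig would record an infinite convergence distance:
parallel = StereoAttributes(RigOrientation.SIDE_BY_SIDE, Eye.LEFT,
                            math.inf, 0.065)
```

Using `math.inf` for the parallel case keeps the field a plain float rather than requiring a separate "is parallel" flag.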

If you have done a reasonable amount of on-set stereo work, please confirm that these attributes make sense, in particular
the stereo convergence and Stereo IA.

We know the stereo convergence ultimately ends up as an angle, but we believe all rigs work in terms of distance to convergence. Has
anybody seen rigs which define this as an angle?
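For what it's worth, converting distance to angle is simple geometry once the interaxial distance is known, so storing distance loses nothing. A minimal sketch (the function name and unit convention are my own assumptions, not from the format):

```python
import math

def convergence_angle_deg(interaxial: float, convergence_distance: float) -> float:
    """Total angle between the two optical axes, in degrees.

    Both arguments must use the same length unit. A parallel rig can pass
    math.inf as the convergence distance, which yields an angle of 0.
    """
    return math.degrees(2.0 * math.atan((interaxial / 2.0) / convergence_distance))
```

For example, a 65 mm interaxial converged at 3 m gives roughly 1.24 degrees, and `math.inf` correctly gives 0 for a parallel rig.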

Stereo IA is another one, I regularly see this in camera reports as "IO", but this seems incorrect (since inter-ocular is distance between eyes, and stereo interaxial is distance between lenses), but I am not an expert in this so do let me know if I've gotten something wrong.

We are currently getting conflicting information about these attributes and have been considering dropping them, but if we can reach
some consensus on them, we will leave them in.

Thanks for any help with this...

Sam.

Wil Manning

Jan 7, 2014, 6:30:32 AM
to ves-tech-ca...@googlegroups.com
Feet/Inches?

I know that's kinda useful for a lot of lens info, but has there been discussion about units? Maybe a Metric/Imperial unit flag would be good? Or are the units of the field irrelevant, given it's just a float data field?
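One common way to handle the question Wil raises is to store the float alongside an explicit unit tag and normalise on read; a sketch (the unit table and helper are my own, not part of the proposed format):

```python
# Conversion factors to metres for units a camera report might plausibly use.
UNIT_TO_METRES = {"mm": 0.001, "cm": 0.01, "m": 1.0, "in": 0.0254, "ft": 0.3048}

def to_metres(value: float, unit: str) -> float:
    """Normalise a (value, unit) pair to metres for downstream tools."""
    return value * UNIT_TO_METRES[unit]
```

With a tagged value, `to_metres(12.0, "in")` and `to_metres(1.0, "ft")` both resolve to 0.3048 m, so downstream consumers never need to guess the field's unit.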

Jon Bragado

May 16, 2014, 7:15:06 AM
to ves-tech-ca...@googlegroups.com
Well, I'll just give my opinion based on my experience as a matchmove 3D artist.
To add VFX (CGI) to a stereo show, the first step is to matchmove the plates for both eyes. We are in charge of delivering a solved stereo camera rig.
Some problems we face, which we have to take into account in our solves, are these: the two cameras are not at the same height, so there is a "vertical shift". Convergence is affected because the optical axes are skewed and do not intersect. The right vector of one of the cameras is not parallel to the stereoscopic plane ("rotation policy"). The cameras have different focal lengths and therefore different distortion, which forces us to create a separate lens object per camera. Asymmetric construction artifacts arise when the cameras are not at the same depth level, which happens when mirrors are used; this is the "depth shift".
When the shots are zoom shots, the two cameras have different zoom factors because each camera has its own lens. This is called "zoom policy", and its direct impact on the footage is that a 0.5% difference in focal length produces about a 5-pixel difference on a 2K plate. Not to mention lens distortion, which depends on each camera's focal length. The dependency arises because we fabricate a separate lens object for each camera, which requires specific lens-grid shots captured for each of the cameras.
To recap, the vertical shift, camera-vector rotation, depth shift, interaxial distance and zoom factors would all be interesting to have.
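The 0.5% figure Jon quotes checks out with back-of-the-envelope arithmetic: a focal-length mismatch scales the image about the optical centre, so the shift is largest at the frame edge. A quick sketch (the 2048-pixel plate width is my assumption for "2K"):

```python
plate_width_px = 2048          # assumed width of a "2K" plate
focal_mismatch = 0.005         # 0.5% focal-length difference between eyes

# A focal-length mismatch scales the image about the optical centre, so a
# feature near the frame edge (half the width from centre) shifts by:
edge_shift_px = (plate_width_px / 2) * focal_mismatch
print(round(edge_shift_px, 2))  # 5.12, matching the ~5-pixel claim
```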

Jon,

Sebastian Sylwan

May 20, 2014, 7:00:23 PM
to ves-tech-camera-reports
Hi Jon, 

thank you for the input. 
Excuse my ignorance, but when you are on set, isn't your aim always to have no vertical shift, aligned axes, no rotation, etc.? All of those are artefacts of the physical inaccuracies of the lenses, rigs and so on. Or am I missing something?

If I understand correctly what you are saying, having fields that tell you whether a setup uses a mirror or zoom lenses would be useful, but most of the other things you mention come out as results of your analysis, right?

Cheers, 

S


--
You received this message because you are subscribed to the Google Groups "ves-tech-camera-reports" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ves-tech-camera-r...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Jon Bragado

May 21, 2014, 6:36:00 AM
to ves-tech-ca...@googlegroups.com, syl...@gmail.com
Yes, the aim is to have no vertical shift, the axes perfectly aligned, no rotation difference, etc. When we are on set, we measure those distances as accurately as we can, to know the rig better. But after regular use of a rig (traveling cameras, moving from one place to another, leaving it in a rest position and then picking it up again), if you measure the distances again you find they are different, because all those signs of wear have moved the rig a bit. I like to measure at least twice so that I can calculate the difference and account for it in my solve as a parameter I already know. If I don't know it, I have to take whatever the solver gives me, and I don't know whether it is right or wrong until someone else tries my camera further down the pipeline, which is not good.
As you were saying, this is the result of inaccuracies in the lenses, rigs and so on. For matchmoving programs, a camera is a perfect camera built from mathematical formulas, not subject to real-world physics like the real ones. For that reason, a stereo rig in matchmove software will always be perfectly parallel, in contrast to a real one. We have adaptive algorithms that can compensate for the minimal differences, but the result is always uncertain.
In 3DEqualizer, we are able to type in those difference values before the solve begins, so that 3DE can take them into account. For that, I thought it would be great to have a place to gather the data by taking a couple of measurements.

Lenses are never identical between the two cameras, so if I can register the focal lengths (especially when zooming) for each camera of a given rig, I have more reliable data. Otherwise I need to rely on the data from my hero camera and take it as a starting point for the non-hero camera.
When the cameras are parallel, you can shoot lens grids per camera as you would normally do on a mono show. But when a mirror sits between the lens and the subject, as in a non-parallel rig, the distortion is more problematic, because we don't know the characteristics of that mirror: whether it's perfectly flat or has a bit of curvature, etc. That is a nightmare a lot of the time. But if we have accurate data for everything else, we narrow the solve and can let the mirror distortion be adaptive, because we have the rest of the values.

Jon, 