Bit quiet at work today while I wait for a shot to bubble through, so thought I'd do a quick writeup
TL;DR - it's not geometric distance, it's the focal-plane distance (as in, the distance along the camera axis to the focal plane that contains the point you're looking at). If anyone wants I can do up a sketch and a more decent write-up, just yell - this was me poking around in Maya more than the maths.
As a simple proof/demo:
- Create a camera and a polycube.
- Set the polycube's translate to -1 in Z. Distance will be 1.
- Set the translate to -1 in Z and 1 in X. Distance is still 1 - even though we've clearly changed the geometric distance, the cube is just sliding around on the focal plane 1 unit away from the camera.
- Move your camera wherever you want in its X/Y plane. Distance is still 1, because you're moving within the plane.
- Rotate your camera. Distance changes, because rotating shifts the focal plane around. (There's a scripted version of this setup just below.)
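If you'd rather not click around, here's a rough script for the same setup (node names are just ones I made up; turn on the Object Details HUD via Display > Heads Up Display > Object Details and look through the camera to watch the Distance From Camera readout):

import pymel.core as pm
cam = pm.camera(name='distanceDemoCam')[0]      # [0] is the transform node
cube = pm.polyCube(name='distanceDemoCube')[0]
cube.setAttr('translate', [0, 0, -1])   # HUD reads 1
cube.setAttr('translate', [1, 0, -1])   # still 1 - sliding along the focal plane
cam.setAttr('translate', [5, 5, 0])     # still 1 - camera moving within its own X/Y plane
cam.setAttr('rotateY', 30)              # distance changes - the focal plane swings around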
And the long-winded answer...
Under the hood in Maya, I suspect the object's bounding box center is being used. (I thought it was the worldMatrix at first, but you can see that isn't the case trivially: take a cube and move all of its verts. Distance will change, even though the worldMatrix is static.) Disclaimer - I've only done limited testing, so you might find a novel case where this doesn't hold, but at that point the problem is more about figuring out which point you want to test.
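Here's a quick sketch of that vert-moving test if you want to see it for yourself (node name is just mine):

import pymel.core as pm
cube = pm.polyCube(name='bboxTestCube')[0]
print(cube.getAttr('worldMatrix'))   # identity
print(cube.getAttr('center'))        # [0, 0, 0]
pm.move(0, 2, 0, cube.vtx, relative=True)   # shove every vertex up 2 units
print(cube.getAttr('worldMatrix'))   # unchanged - still identity
print(cube.getAttr('center'))        # now [0, 2, 0] - the bbox center followed the verts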
I'm pretty sure this is what the HUD uses, though, because it makes sense AND it explains the funny quirk that a camera is always apparently 0.057 units away from itself: the camera's bbox center happens to sit at [0, 0.210, -0.057] in local space, and since it's in local space the only distance we care about is the traversal along the camera normal, which is -Z - so 0.057 units. Tada! :D
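You can reproduce the quirk on a fresh camera (assuming the default camera geometry hasn't changed in your Maya version):

import pymel.core as pm
cam = pm.camera()[0]
print(cam.getAttr('center'))   # expect roughly [0, 0.210, -0.057]
# the -Z traversal of that point is 0.057, which is exactly what the HUD reports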
# here's some sample code to illustrate, more verbose than usual by way of
# documentation - we use PyMEL here at the office by the way, but that doesn't
# change the principles
import pymel.core as pm
# select camera, and apply the worldMatrix to the -Z basis vector to get the
# perturbed camera normal. this will accommodate scale/rotate/skew if you really
# really wanted to do that, and because we're using the WM it should manage
# all manner of grouping etc etc
# (camera points down -ve Z axis unless you've done something *incredibly* funky)
cam = pm.PyNode('camera1') # just grabbing the camera I have in my scene
cam_normal = cam.getAttr('worldMatrix').transpose() * pm.dt.Vector([0,0,-1])
# Transpose to flip between row- and column-major conventions; pymel kindly
# pads the vector with an implicit zero in element 4 (I believe), so the
# translation part drops out.
# You probably want to rewrite this with MVectors if you're happier with them.
cam_normal.normalize()
cam_position = cam.getAttr('worldMatrix').translate
# do NOT use the camera's .center here - it will be wrong (that's the 0.057
# quirk from above) even if you account for the parent
# this is where you'd start looping over your object list
distance_object = pm.PyNode('pCube2')
object_vector = distance_object.getAttr('parentMatrix').translate + distance_object.getAttr('center') - cam_position
# GOTCHA: the .center attribute folds in the local object-space transform,
# but won't give you parent-level transforms, so you have to add those
# explicitly if you use this approach. (Note this only covers a *translated*
# parent - a rotated or scaled parent would need the center pushed through
# the full parentMatrix instead.)
print(object_vector.dot(cam_normal))
# project the object vector onto the camera normal (yay for dot products) to
# find out how many units along it we are.
# both vectors point away from the camera, so +ve numbers are in front and
# -ve are behind.
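For convenience, here's the same maths rolled into a function you can loop over a selection with - it's just the snippet above repackaged (the 'camera1' name and the translated-parent caveat carry over):

import pymel.core as pm

def focal_plane_distance(cam, obj):
    # distance from cam to obj along the camera axis, HUD-style
    cam = pm.PyNode(cam)
    obj = pm.PyNode(obj)
    cam_wm = cam.getAttr('worldMatrix')
    cam_normal = cam_wm.transpose() * pm.dt.Vector([0, 0, -1])
    cam_normal.normalize()
    object_vector = (obj.getAttr('parentMatrix').translate
                     + obj.getAttr('center') - cam_wm.translate)
    return object_vector.dot(cam_normal)

for node in pm.selected():
    print('%s: %s' % (node, focal_plane_distance('camera1', node)))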
-Anthony