Proposal for Camera type in Alembic


Joe Ardent

Mar 8, 2011, 3:44:31 PM
to alembic-d...@googlegroups.com
Hello, Alembic Community! My name is Joe Ardent, and I'm one of the
core Alembic library developers. We've been having a discussion among
ourselves regarding cameras as a pseudo-primitive type in Alembic, in
the same way that, e.g., PolyMeshes are a pseudo-primitive type. There's
been a lot of interest in this, and so we wanted to bring what we have
so far to the table for feedback and commentary.

First, here's a little information on what defining a type means. The
geometric types in Alembic are represented by "Schemas", which in the
context of the AbcGeom library, are a collection of typed Properties
with associated names, values, and getters and setters. A
"DoubleProperty" means that values are read, written, and stored as
64-bit floating point numbers. Units, such as cm vs. inches, are not
explicitly stored in Alembic, so the interpretation of the stored values
is a matter of convention and documentation.
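
As a rough illustration of what that means in code, here's a minimal sketch of
writing a single DoubleProperty with the generic Abc layer (the object and file
names are made up, and the exact constructor signatures may differ slightly
from the current headers, so treat this as a sketch rather than a recipe):

#include <Alembic/Abc/All.h>
#include <Alembic/AbcCoreHDF5/All.h>

int main()
{
    using namespace Alembic::Abc;

    // An archive and a child object to hang the property off of.
    OArchive archive( Alembic::AbcCoreHDF5::WriteArchive(), "cameraSketch.abc" );
    OObject camObj( archive.getTop(), "renderCam" );

    // A DoubleProperty: the value is written and stored as a 64-bit float.
    // The unit (mm, by the convention proposed below) is documentation only;
    // nothing in the file records it.
    ODoubleProperty focalLength( camObj.getProperties(), "focalLength" );
    focalLength.set( 35.0 );

    return 0;
}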

The main goal is the specification of a simple, greatest common
denominator Camera that can be used in a variety of contexts where one
might wish to cache out camera-related data. As such, the spec we've
been converging toward is not a rig, stereo or otherwise. In fact,
information like location and orientation is not carried within the
Camera itself; that information would come from being parented to a
transform (which itself could be parented to an entire hierarchy of
transforms). We also want to ensure that at a minimum, Renderman-style
properties such as field of view and screen window are capturable. So
without further preamble, here's what we have so far:


*CORE PROPERTIES*

These are the very basic camera properties that we feel should be captured.

- DoubleProperty focalLength (in mm, default value: 35.0): camera
focal length

- DoubleProperty fStop (default value: 5.6): Optical property of the
lens. Focal length divided by "effective" lens diameter.

- DoubleProperty[2] horizontalFilmAperture/verticalFilmAperture: two
doubles (in cm, default values: 3.6, 2.4): Horizontal/Vertical size of
the camera filmback

- DoubleProperty[2] horizontalFilmOffset/verticalFilmOffset: two
doubles (in cm, default values: 0.0, 0.0): horizontal/vertical film back
offsets


*ADDITIONAL PROPERTIES*

These are hopefully-noncontroversial properties that we feel would be
broadly useful without over-constraining the definition; a small code
sketch pulling the full property set together follows the list.

- DoubleProperty overscan (default: 1.0): scaling value representing
how much over the film-viewable frustum to display

- DoubleProperty cameraScale (default: 1.0): Scale of the camera focal
length for simulating differently sized cameras without changing other
attributes

- DoubleProperty lensSqueezeRatio (default: 1.0): the amount the
camera's lens compresses the image horizontally (the width/height lens
aspect ratio)

- enum filmFit (default: fill): Controls the size of the resolution
gate relative to the film gate (one of fill/horizontal/vertical/overscan)

- DoubleProperty filmFitOffset (cm, default: 0): Offsets the
resolution gate relative to the film gate either vertically (if filmFit
is horizontal) or horizontally (if filmFit is vertical). filmFitOffset
has no effect if filmFit is fill or overscan.

- DoubleProperty[2] nearClipPlane/farClipPlane (cm, default: 0.1,
100000.0): Distance from the camera to the near/far clipping plane
(object space)
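
To make the defaults and units above concrete, here's a small self-contained
sketch (plain C++, not the eventual AbcGeom schema API; filmFit and
filmFitOffset are omitted for brevity) that carries the proposed properties
and derives the horizontal field of view from them:

#include <cmath>
#include <cstdio>

// Proposed camera properties with their default values. Units follow the
// text above: focalLength in mm, apertures/offsets and clip planes in cm,
// everything else a unitless ratio.
struct CameraSample
{
    double focalLength            = 35.0;      // mm
    double fStop                  = 5.6;
    double horizontalFilmAperture = 3.6;       // cm
    double verticalFilmAperture   = 2.4;       // cm
    double horizontalFilmOffset   = 0.0;       // cm
    double verticalFilmOffset     = 0.0;       // cm
    double overscan               = 1.0;
    double cameraScale            = 1.0;
    double lensSqueezeRatio       = 1.0;
    double nearClipPlane          = 0.1;       // cm
    double farClipPlane           = 100000.0;  // cm
};

// Horizontal field of view in degrees; the factor of 10 converts the
// aperture from cm to mm so both lengths share a unit.
double horizontalFovDegrees( const CameraSample &c )
{
    const double kPi = 3.14159265358979323846;
    return 2.0 * std::atan( ( c.horizontalFilmAperture * 10.0 ) /
                            ( 2.0 * c.focalLength ) ) * 180.0 / kPi;
}

int main()
{
    CameraSample cam;  // all defaults
    std::printf( "default horizontal FOV: %.2f degrees\n",
                 horizontalFovDegrees( cam ) );  // ~54.43
    return 0;
}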


*QUESTIONABLE PROPERTIES*

Below are some open issues about which we don't have a good idea
regarding whether or not they should be included, and/or how they should
be represented.

* shutterAngle/shutterTime: these make sense on physical cameras
(shutterAngle only applies to film, not digital cameras) but don't make
sense in CG (other than for initial matchmoving purposes, as extra data
to help the solve).

* Separate frame-relative shutter open/close values DO make sense at
rendertime (to indicate whether motion blur is center-framed,
forward-framed, etc.), but aren't camera specific, and only act as a
sliding window to determine which samples to use, within the Alembic
file, for rendering purposes. Typically this is something set per show,
and optionally tweaked at render-time within a lighting package. I vote
for leaving this out of the camera definition.

* ortho cameras: support for ortho cameras would be nice, but should be
a separate 'OrthoCamera' object type (instead of adding a boolean +
'orthoWidth' parameter to the perspective camera)


And there we have it. We're hoping that for our next beta update,
we'll have a Camera type in Alembic, but we want to ensure that the
community's needs will be met. We look forward to engaging with you all
to make that happen.


-Joe Ardent, Alembic developer

Colin Doncaster

Mar 8, 2011, 4:26:44 PM
to alembic-d...@googlegroups.com
Hey Joe -

Sorry to jump onto a point that you specifically said you "weren't" going to cover - but I think one major issue with stereo production is the lack of standardization, especially for camera information like floating windows, convergence, etc. Although stereoscopic productions aren't the norm yet ( though I imagine a lot of folks watching this discussion could be involved in a stereo production ), it seems this could introduce a hole in the Alembic spec where each studio will, from the get-go, end up with differing properties. To me this stereo camera information is one of the biggest gotchas in passing data both internally and externally for stereo productions.

I think shutter time should end up in "Additional Properties" - although it may not be needed in many instances, it does capture the intention of the cinematographer, which may be useful for reference somewhere in a pipeline. This may be misusing the file format, though, since that would be metadata rather than baked/cached data - I'm not sure whether that's the intent. Some shows are still being shot on film, and many of them with varying shutter angles to capture a "look" that will need to be matched.

cameraScale seems very generically named, if it's scaling the focal length could/should it be focalScale?

Thanks!
Colin


Joe Ardent

Mar 8, 2011, 5:22:24 PM
to alembic-d...@googlegroups.com
Colin Doncaster wrote:
>
> Sorry to jump onto a point that you specifically said you "weren't" going to cover - but I think one major issue with stereo production is the lack of standardization, especially for camera information like floating windows, convergence, etc. Although stereoscopic productions aren't the norm yet ( though I imagine a lot of folks watching this discussion could be involved in a stereo production ), it seems this could introduce a hole in the Alembic spec where each studio will, from the get-go, end up with differing properties. To me this stereo camera information is one of the biggest gotchas in passing data both internally and externally for stereo productions.
>

Hey Colin, nothing is off the table, and I'm glad for your feedback!

The problem with formalizing a stereo rig in the code itself is
principally one of lack of flexibility. There are many ways that people
already rig their stereo cameras, and figuring out a way to ensure that
such rigs are stored in a way that is perfectly interconvertible through
Alembic is, it seems to us, intractable. This is even before getting
into the issue of non-stereo, multi-lens setups (imagine you have three
lenses for your project, two in a stereo config and one for reference).

Because of the inherent fragility of camera rigs, I would imagine that
for the most part, Cameras in Alembic would be used as proxy objects to
get the scene set up 80-90%, at which point, the Alembic node is
replaced with a live camera rig appropriate for that project.

Still, if consensus for a canonical N-lens setup is achieved here, we'd
be more than happy to include it in the official Alembic spec. Or, if
it comes to pass that a consensus convention arises as people use
Alembic for exchange, it would be appropriate to formalize that as well.

> I think shutter time should end up in "Additional Properties" - although it may not be needed in many instances, it does capture the intention of the cinematographer, which may be useful for reference somewhere in a pipeline. This may be misusing the file format, though, since that would be metadata rather than baked/cached data - I'm not sure whether that's the intent. Some shows are still being shot on film, and many of them with varying shutter angles to capture a "look" that will need to be matched.
>

Thank you for the commentary!

> cameraScale seems very generically named, if it's scaling the focal length could/should it be focalScale?
>

This is an excellent point, and I think we should take it.

-Joe

Francois Chardavoine

Mar 8, 2011, 6:56:54 PM
to alembic-discussion
One basic thing that is pretty typical (I believe) and worth adding is
a postScale property.

I've included pseudo-code below as to how the properties all
eventually end up affecting the ScreenWindow and FOV parameters that
would be needed for back-end Renderman-style rendering. This should
help clarify what each property's contribution is.
(And hopefully the formatting doesn't get completely mangled when
posting!)
Francois.

SCREEN WINDOW / FOV CALCULATIONS
---------------------------------------------------------------

// calculate aperY - note: aperture X is same as squeeze
aperY = verticalFilmAperture / horizontalFilmAperture;

// calculate the offsets and normalize the y offset with respect to the
// x offset and aspect ratio
offsetX = (horizontalFilmOffset / horizontalFilmAperture) * lensSqueezeRatio;
offsetY = verticalFilmOffset / horizontalFilmAperture;

// calculate the base window/film aspect ratios
windowAspect = horizontalFilmAperture / verticalFilmAperture;
filmAspect   = lensSqueezeRatio * horizontalFilmAperture / verticalFilmAperture;

// initialize the scale/translate values
scaleX = scaleY = 1.0;
translateX = translateY = 0.0;

// modulate scale/translate values based on filmFit
switch filmFit:
    case FILL:
        if (windowAspect <  filmAspect) scaleX = windowAspect / filmAspect;
        if (windowAspect >= filmAspect) scaleY = filmAspect / windowAspect;
    case HORIZONTAL:
        scaleY = filmAspect / windowAspect;
        if (scaleY > 1.0)
            translateY = filmFitOffset * (aperY - (aperY * scaleY)) * 0.5;
    case VERTICAL:
        scaleX = windowAspect / filmAspect;
        if (scaleX > 1.0)
            translateX = filmFitOffset * 0.5 *
                         (lensSqueezeRatio - (lensSqueezeRatio * scaleX));

// apply the overscan
scaleX *= overscan;
scaleY *= overscan;

// apply the post scale
scaleX /= postScale;
scaleY /= postScale;

// calculate the screen window
hAperX = 0.5 * lensSqueezeRatio;
SCREEN_WINDOW_LEFT   = 2.0 * camScale * (-hAperX * scaleX + offsetX + translateX);
SCREEN_WINDOW_RIGHT  = 2.0 * camScale * ( hAperX * scaleX + offsetX + translateX);
SCREEN_WINDOW_BOTTOM = 2.0 * camScale * (-0.5 * aperY * scaleY + offsetY + translateY);
SCREEN_WINDOW_TOP    = 2.0 * camScale * ( 0.5 * aperY * scaleY + offsetY + translateY);

// calculate the field of view
// note: focalLength is in mm while horizontal film aperture is in cm
H_FIELD_OF_VIEW = 2.0 * atan(horizontalFilmAperture * 10.0 / (2.0 * focalLength))
                  * 180.0 / 3.1415926535897932;
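
As a quick sanity check, running the defaults through the above (fill fit, no
offsets, and overscan, postScale, lensSqueezeRatio and camScale -- i.e.
cameraScale -- all at 1.0) should give a screen window of [-1, 1] x [-2/3, 2/3]
and a horizontal FOV of roughly 54.4 degrees. A trivial transcription of just
that default path:

#include <cstdio>

int main()
{
    const double horizontalFilmAperture = 3.6;  // cm
    const double verticalFilmAperture   = 2.4;  // cm

    double aperY  = verticalFilmAperture / horizontalFilmAperture;  // 2/3
    double hAperX = 0.5;  // 0.5 * lensSqueezeRatio, with squeeze = 1

    // With every scale at 1 and every offset/translate at 0:
    std::printf( "screen window: [%g, %g] x [%g, %g]\n",
                 2.0 * -hAperX, 2.0 * hAperX,              // -1 .. 1
                 2.0 * -0.5 * aperY, 2.0 * 0.5 * aperY );  // -2/3 .. 2/3
    return 0;
}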

Rob Bredow

Mar 9, 2011, 1:41:17 PM
to alembic-discussion

> I've included pseudo-code below as to how the properties all
> eventually end up affecting the ScreenWindow and FOV parameters that
> would be needed for back-end Renderman-style rendering. This should
> help clarify what each property's contribution is.

After reading through the pseudocode, I'm inclined to think that
simpler is better for Alembic. I propose removing the filmFit option
and defining the camera to always fit horizontally. I believe most apps
already use a horizontal fit, and if someone preferred another fit
format, it could be converted at load/translation time. I'm hoping a
simpler decision here yields more reliable implementations in all the
apps.

For ShutterAngle/ShutterTime, I agree that it's primarily useful
when storing data from what was shot on set, and I do think it should be an
optional property in Alembic cameras. For the way to store this
information, I prefer to use the terms ShutterOpen/ShutterClose for
clarity (even though other terms may be more common on set, I think
this is a good universal expression). ShutterOpen/ShutterClose would
be expressed in terms of the fraction of the frame time where the
shutter opened and closed. An example ShutterOpen/ShutterClose time
could be -0.25/0.25 to express a 180 degree shutter where the
matchmove is centered in the "middle" of the action. 0.0/0.5 would
express a 180 degree shutter where the matchmove is positioned on the
leading edge of the frame.
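
For concreteness, here's one way to convert between the on-set description
(a shutter angle plus where the exposure is centered, in frames) and a
frame-relative open/close pair; the function and variable names are just for
illustration, not a proposed API:

#include <cstdio>

// 360 degrees of shutter == one full frame of exposure time.
void shutterOpenClose( double shutterAngleDeg, double frameCenter,
                       double &open, double &close )
{
    double halfWidth = 0.5 * shutterAngleDeg / 360.0;
    open  = frameCenter - halfWidth;
    close = frameCenter + halfWidth;
}

int main()
{
    double open, close;

    shutterOpenClose( 180.0, 0.0, open, close );   // 180 deg, centered on the frame
    std::printf( "%g / %g\n", open, close );       // -0.25 / 0.25

    shutterOpenClose( 180.0, 0.25, open, close );  // 180 deg, opening on the frame
    std::printf( "%g / %g\n", open, close );       // 0 / 0.5
    return 0;
}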

Glad to be having this discussion on this list.

Thanks--

Rob Bredow

Colin Doncaster

Mar 9, 2011, 2:05:06 PM
to alembic-d...@googlegroups.com

Sorry if I've misread this, but wouldn't 0.0/0.5 be the trailing end of frame? ( with -0.5/0.0 being leading in this instance ) I can't help but feel that it might be better to separate the definition of shutter angle ( time ) and frame centre into two different values, as it feels more intuitive ( possibly more clear too ).

A standard filmFit solution sounds ideal. :)

Francois Chardavoine

Mar 9, 2011, 2:24:39 PM
to alembic-d...@googlegroups.com, Rob Bredow
Rob Bredow wrote, On 03/09/11 10:41:

>> I've included pseudo-code below as to how the properties all
>> eventually end up affecting the ScreenWindow and FOV parameters that
>> would be needed for back-end Renderman-style rendering. This should
>> help clarify what each property's contribution is.
>>
> After reading through the pseudocode, I'm inclined to think that
> simpler is better for Alembic. I propose removing the filmFit option
> and defining the camera to always fit horizontally. I believe most apps
> already use a horizontal fit, and if someone preferred another fit
> format, it could be converted at load/translation time. I'm hoping a
> simpler decision here yields more reliable implementations in all the
> apps.
>

I have no big issue with removing the filmFit property and 'hardcoding'
to a horizontal fit. It's one of those more philosophical questions of
"if I set things in an application (like Maya) then export and reload an
Alembic file, what should I expect to be restored?". In some cases the
parameters would be restored as-is, but for other higher-level
parameters these would be lost and lower-level parameters will have been
modified to produce the same visual result.

In this case, if filmFit isn't preserved and is set to something other
than 'horizontal' in Maya (or any other app which allows you to specify
it), it means at export time the horizontal/verticalAperture values will
be scaled accordingly (so that the rendered result matches what was seen
in Maya). We just need to figure out if it's acceptable that low-level
values (like the filmback size) may be changed on the fly at export time
from what the user had set.

The same argument applies to something like 'cameraScale'.

This is something we struggled with internally, and ended up going with
a more 'expressive' camera parameter model (preserving some of the more
'superfluous' maya parameters) to avoid dealing with loss of information
and possibly changing low-level values. These obviously would have
reasonable default values, and would only contribute if they had been
changed from their defaults. There's a good case to be made for
simplicity as well, but it does potentially limit the places where an
Alembic files can be used without also saving extra metadata along with
the camera (if the source parameters are required for certain workflows).

Francois.

Andrew D Lyons

Mar 9, 2011, 3:21:21 PM
to alembic-d...@googlegroups.com, Francois Chardavoine, Rob Bredow
The Maya camera rig contains a lot of values that are not defined, or do not map cleanly to any renderer - let alone any other 3d application. I would argue that Maya's camera rig is an anomaly, and Alembic should NOT seek to exhaustively support all its settings. Specifically, "Horizontal" should be the only supported resolution gate fit setting in Maya.

As a side note, if you look at the way Maya computes the FOV value, you find metric-to-imperial conversion constants (0.03937) that have been rounded to 5 decimal places. When matching Maya rib exports to other packages this rounding can cause significant registration divergences in deep scenes. Although I'm by no means a fan of using Maya as a benchmark for the Alembic camera specification, from a practical point of view perhaps Alembic should compensate for this dodgy math in some way?

http://forums.cgsociety.org/archive/index.php/t-726041.html
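
For what it's worth, if the constant in question is the millimeter-to-inch
factor (an assumption on my part), the rounding error is on the order of two
parts per million -- invisible per pixel, but the kind of thing that compounds
when apertures and FOVs are re-derived back and forth between packages:

#include <cstdio>
#include <cmath>

int main()
{
    const double exact   = 1.0 / 25.4;  // mm -> inch, exact
    const double rounded = 0.03937;     // the five-decimal constant

    std::printf( "exact:   %.10f\n", exact );
    std::printf( "rounded: %.10f\n", rounded );
    std::printf( "relative error: %.2e\n",
                 std::fabs( rounded - exact ) / exact );  // ~2e-06
    return 0;
}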

Cheers


Rob Bredow

Mar 10, 2011, 1:05:49 AM
to alembic-discussion


On Mar 9, 11:05 am, Colin Doncaster <colin.doncas...@gmail.com> wrote:
> Sorry if I've mis-read this, wouldn't 0.0/0.5 be the trailing end of frame?  ( with -0.5/0.0 being leading in this instance )

You are correct. I had it backwards in my original explanation. Maybe
I think this is more intuitive because I'm used to it from Renderman
all these years. Alternatively would you prefer:

shutter angle (in seconds? in degrees like a real camera? Radians?
(yuck))
shutter frame center: what would be the best units for this one? -1,
0, +1?

Either way will certainly work. Definitely open to the clearest
solution here.

Thanks-

-Rob

Moritz Moeller

Mar 10, 2011, 6:07:01 AM
to alembic-discussion
On Mar 10, 6:05 am, Rob Bredow <rob.bre...@gmail.com> wrote:
> You are correct. I had it backwards in my original explanation. Maybe
> I think this is more intuitive because I'm used to it from Renderman
> all these years. Alternatively would you prefer:
>
>   shutter angle (in seconds? in degrees like a real camera? Radians?
> (yuck))
>   shutter frame center: what would be the best units for this one? -1,
> 0, +1?
>
> Either way will certainly work. Definitely open to the clearest
> solution here.

I suggest storing values for shutter open and shutter close.
This has the following advantages:

- The values use the same unit: signed fractions of a frame
- No fps value is required (as would be needed to convert a time-delta
  shutter to a frame fraction)

Examples:

Shutter opens on frame, 180 deg: 0 .. 0.5
Shutter centered on frame, 180 deg: -0.25 .. 0.25

In the second example, sample data from the previous frame would be
needed.


.mm

Andy Lomas

Mar 10, 2011, 6:23:03 AM
to alembic-discussion
A few of my thoughts:

I think it would be best if focalLength and horizontalFilmAperture /
verticalFilmAperture / horizontalFilmOffset / verticalFilmOffset were
all in the same units so that there is no conversion factor needed. My
suggestion would be mm throughout.

Overscan: shouldn't we have separate horizontal and vertical
overscan values?

Shutter: I've always preferred simple [-1,1] shutter open and close
values as being the least ambiguous way of describing the shutter open
interval (assuming that we've got non-rolling shutters with instant
shutter open and close, but I assume we don't want to even consider
such things in the standard camera model).

Andy

Colin Doncaster

Mar 10, 2011, 6:33:31 AM
to alembic-d...@googlegroups.com
I would have thought degrees would be best, as it's an already understood language. ShutterOpen/Close still feels very RenderMan influenced, and I'm assuming the camera information is supposed to be generic? It feels like I might be the only one pushing for this though.

-1,0,+1 seems like a good way of describing frame centre, this still gives one the option ( if so desired ) to actually set it to -2 and have a 720 degree shutter angle for some odd but interesting effects ( and to that point, the format shouldn't impose restrictions ). :)

Cheers,
Colin

Jonathan Gibbs

Mar 10, 2011, 1:53:12 PM
to alembic-d...@googlegroups.com, Joe Ardent
First, thanks so much for including us in this process!

In the spirit of Rob's comments, I agree that less is more here. There
should be the fewest parameters required to describe the camera.

So, for instance, I suggest removing "overscan". You get the same
result by changing the filmApertures. For the Maya exporter, it could
store an attribute called "overscan" to indicate how much overscan was
set on the Maya camera, but still bake the overscan into the
filmAperture. Then the Maya importer could see that "overscan"
attribute and create a Maya camera which better matches the camera
originally exported. This gives you the cleanest round-trip in
Maya, but doesn't make all the other apps need to understand a
redundant attribute like "overscan".
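
A sketch of that round trip (struct and function names are hypothetical):
bake the overscan into the apertures on export, stash the original value as a
plain side attribute, and divide it back out when importing into Maya:

#include <cstdio>

struct ExportedCamera
{
    double horizontalFilmAperture;  // cm, with overscan baked in
    double verticalFilmAperture;    // cm, with overscan baked in
    double bakedOverscan;           // carried only as a side attribute
};

// Exporter: widening the apertures by the overscan factor gives the same
// frustum as applying overscan separately.
ExportedCamera exportCamera( double hAper, double vAper, double overscan )
{
    ExportedCamera out = { hAper * overscan, vAper * overscan, overscan };
    return out;
}

// Maya importer: recover the original values; any other app just uses the
// baked apertures and ignores the side attribute.
void importIntoMaya( const ExportedCamera &c,
                     double &hAper, double &vAper, double &overscan )
{
    overscan = c.bakedOverscan;
    hAper    = c.horizontalFilmAperture / overscan;
    vAper    = c.verticalFilmAperture / overscan;
}

int main()
{
    ExportedCamera out = exportCamera( 3.6, 2.4, 1.1 );
    double h, v, o;
    importIntoMaya( out, h, v, o );
    std::printf( "restored: %g x %g cm, overscan %g\n", h, v, o );  // 3.6 x 2.4, 1.1
    return 0;
}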

I also agree that it would be nice to stick to "mm" for the film
apertures so it's in the same units as the focal length.

Otherwise, focalLength, fStop, *filmAperture and *filmOffset look
great to me, as do nearClipPlane and farClipPlane.

As mentioned above, I would burn the overscan into the filmApertures,
and the same for cameraScale, filmFit, and filmFitOffset. None of them
seem needed for a complete description of the camera.

I think lensSqueezeRatio is needed (but I've never used it personally).

I'm surprised to not see any DOF-related attributes aside from fStop.
What about the focus distance?

> * shutterAngle/shutterTime: these make sense on physical cameras
> (shutterAngle only applies to film, not digital cameras) but don't make
> sense in CG (other than for initial matchmoving purposes, as extra data to
> help the solve).

I've never used these, so I'll let those who have comment.

> * Separate frame-relative shutter open/close values DO make sense at
> rendertime (to indicate whether motion blur is center-framed,
> forward-framed, etc.), but aren't camera specific, and only act as a sliding
> window to determine which samples to use, within the Alembic file, for
> rendering purposes. Typically this is something set per show, and optionally
> tweaked at render-time within a lighting package. I vote for leaving this
> out of the camera definition.

I see why you say that, but to me it would be nice when sharing
cameras amongst tools to have this automatically come through. As the
tools get more sophisticated we're seeing, for instance, real-time
motion blur preview in animation tools. I think a shutterOpen and
shutterClose attribute would be pretty useful.

> * ortho cameras: support for ortho cameras would be nice, but should be a
> separate 'OrthoCamera' object type (instead of adding a boolean +
> 'orthoWidth' parameter to the perspective camera)

I agree, a separate type. (or have orthoCamera and perspectiveCamera
both be subclasses of Camera).

Jonathan Gibbs

Mar 10, 2011, 1:55:59 PM
to alembic-d...@googlegroups.com, Moritz Moeller
+1

Jonathan Gibbs

Mar 10, 2011, 1:59:59 PM
to alembic-d...@googlegroups.com, Colin Doncaster
On Thu, Mar 10, 2011 at 3:33 AM, Colin Doncaster
<colin.d...@gmail.com> wrote:
> I would have thought degrees would be best, as it's an already understood language. ShutterOpen/Close still feels very RenderMan influenced, and I'm assuming the camera information is supposed to be generic? It feels like I might be the only one pushing for this though.

The time values to me seem like just encoding the actual important
data, rather than the mechanism by which current (film) cameras do
that.

Of course, this argument could also be applied to just storing FOV
rather than focal-length/aperture-size pairs, which I don't want to
do! :)

> -1,0,+1 seems like a good way of describing frame centre, this still gives one the option ( if so desired ) to actually set it to -2 and have a 720 degree shutter angle for some odd but interesting effects ( and to that point, the format shouldn't impose restrictions ).  :)

Setting shutterOpen to -2 and shutterClose to 0 (which I think matches
-2 and 720 degrees) should be legal.

--jono

Francois Chardavoine

Mar 10, 2011, 4:20:23 PM
to alembic-d...@googlegroups.com, Jonathan Gibbs, Joe Ardent
Jonathan Gibbs wrote, On 03/10/11 10:53:

> I'm surprised to not see any DOF-related attributes aside from fStop.
> What about the focus distance?

Quick answer on this -
Correct, I think the focusDistance needs to be in there. That was
accidentally left off when trying to prune out unwanted parameters,
since it doesn't directly factor into the frustum calculation. But it
should definitely track through, whether it's used at 3D rendertime, or
as part of a 2D comp DOF operation.

focusDistance should be all that's needed, though. The near/far DOF
values can be derived from the focalLength, focusDistance and hyperfocal
distance, which itself is derived from the focalLength, fStop and
'circle of confusion' (cc). The cc is in theory dependent on print size
and how far the viewer is from the final image (at least that's my
understanding). In practice, I think filmbackDiagonal/1440 is often used
(which amounts to 0.03mm for a 35mm camera).

Full disclosure of equations here :)
http://www.dofmaster.com/equations.html
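
Following those equations, a minimal sketch of deriving the hyperfocal
distance and near/far focus limits from the properties already proposed (all
distances in mm; the circle of confusion defaults to the
filmback-diagonal / 1440 rule of thumb mentioned above):

#include <cmath>
#include <cstdio>
#include <limits>

struct DofLimits { double nearLimit, farLimit; };

// Hyperfocal distance: H = f^2 / (N * c) + f
double hyperfocal( double focalLength, double fStop, double coc )
{
    return ( focalLength * focalLength ) / ( fStop * coc ) + focalLength;
}

// Near/far limits of acceptable focus for a subject at focusDistance.
DofLimits dofLimits( double focalLength, double fStop,
                     double focusDistance, double coc )
{
    double h = hyperfocal( focalLength, fStop, coc );
    DofLimits d;
    d.nearLimit = focusDistance * ( h - focalLength ) /
                  ( h + focusDistance - 2.0 * focalLength );
    d.farLimit  = ( focusDistance < h )
                ? focusDistance * ( h - focalLength ) / ( h - focusDistance )
                : std::numeric_limits<double>::infinity();  // at/past hyperfocal
    return d;
}

int main()
{
    // 35 mm lens at f/5.6 focused at 5 m, CoC from a 36x24 mm filmback.
    double coc = std::sqrt( 36.0 * 36.0 + 24.0 * 24.0 ) / 1440.0;  // ~0.03 mm
    DofLimits d = dofLimits( 35.0, 5.6, 5000.0, coc );
    std::printf( "near %.0f mm, far %.0f mm\n", d.nearLimit, d.farLimit );
    return 0;
}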

F.

Rob Bredow

Mar 11, 2011, 12:28:11 AM
to alembic-discussion
> In the spirit of Rob's comments, I agree that less is more here. There
> should be the fewest parameters required to describe the camera.

I held that opinion when we started this project as well...but I lost
my conviction on it after a few discussions about it. One example: we
often do 2d animation on top of our matchmoved 3d cameras. Of course,
this can be baked into a single projection node (even a projection
matrix for that matter), but it's very often useful to be able to know
the 2d movement separate from the 3d matchmove further down the
pipeline. That's how it's worked at Imageworks for years and I didn't
see a reason to try eliminate that as long as the description was well
understood and didn’t include too many different options (hence me
arguing to kill filmFit).

> So, for instance, I suggest removing "overscan". You get the same
> result by changing the filmApertures.

Yes, but would it be reasonable to agree to leave overscan as part of
the actual camera definition for people who want to use it in their
pipeline, but then provide a convenience function in Alembic that
allows you to extract the "simplest possible representation" of the
camera for use in a renderer or app that doesn't need the extra
information? That's how I propose we treat the 2d parameters in
Alembic--these parameters map neatly between Maya and Houdini (and
perhaps other 3d apps as well), but you'd extract the simple camera
for renderman.

The advantage to that choice (if we can settle on the "fairly simple"
camera that many rich 3d apps will want to support) is that we'll get
uniform support for 3d and 2d camera controls across apps. If we
actually bake it down (and use extra channels to pass along the
redundant data) I think we'll see a collection of different camera
styles out there right away--something I'd like to avoid if possible.

> I also agree that it would be nice to stick to "mm" for the film
> apertures so it's in the same units as the focal length.

I agree that the units for aperture and focal length should be
millimeters. However, I believe it works out that as long as you use
the same units for both focal length and aperture the units don't
actually matter. Right? In either case, default values should be
expressed in mm's for focalLength and apertures.

Thanks Jon for the thoughts--

Rob




Jonathan Gibbs

Mar 11, 2011, 1:30:36 AM
to alembic-d...@googlegroups.com, Francois Chardavoine, Joe Ardent
> focusDistance should be all that's needed, though.

I agree. In cm?

Thanks!

--jono

Jonathan Gibbs

Mar 11, 2011, 1:38:20 AM
to alembic-d...@googlegroups.com, Rob Bredow
On Thu, Mar 10, 2011 at 9:28 PM, Rob Bredow <rob.b...@gmail.com> wrote:
> Yes, but would it be reasonable to agree to leave overscan as part of
> the actual camera definition for people who want to use it in their
> pipeline, but then provide a convenience function in Alembic that
> allows you to extract the "simplest possible representation" of the
> camera for use in a renderer or app that doesn't need the extra
> information?

Sure, that might work well. Is there a plan for the library to also
just give you the projection matrix? (Left-handed?)

We generally don't use the overscan in Maya, since we tend to overscan
different amounts horizontally and vertically.

I think as long as the precise meaning of any extra parameters is
clearly defined (so we're not trying to reverse-engineer them as we
often have to do), some redundancy isn't going to hurt.

> I agree that the units for aperture and focal length should be
> millimeters. However, I believe it works out that as long as you use
> the same units for both focal length and aperture the units don't
> actually matter. Right?

Yes, for the projection matrix it's only the ratio between them which
matters, so the units cancel.

For DOF, I can't recall...

--jono

Jonathan Litt

Mar 11, 2011, 2:55:58 AM
to alembic-d...@googlegroups.com
Hi all!

I'll send one consolidated response to a bunch of previous messages.

About units, I'd vote for putting units directly into property/parameter names. For example "horizontalApertureMM" instead of "horizontalAperture". That, or dragging along a second property to describe the units of any unit-based property. For example "horizontalAperture" and "horizontalApertureUnits", where the latter is ideally an enum type if Alembic has that. It's one thing for apps like Maya to rely on internal conventions, but Alembic is meant to be an interchange format and shouldn't rely on out-of-band conventions to prevent misinterpretation of units.

I also agree with the sentiment to drop "filmFit" and "filmFitOffset" because those only make sense when taking into account a rendering resolution, and resolution isn't part of the camera definition nor should it be. Plus it's not always applicable. For example in prman there's no such thing as Maya's film fit -- whatever you specify as the camera will be scaled to fit in the resolution even if non-uniformly.

Also I think it's a slippery slope to try to chase down all of Maya's attributes in order to preserve a camera across export/import. It's really all or nothing if you're gonna go that route. For example Maya has an "Auto Clipping Planes" option, so should that be included? It would be incorrect to export a camera without it and then re-import it with whatever numbers happened to be set as the clipping planes. In the general case I'd rather that Alembic not be concerned with tacking on extra properties just because it would be convenient for use in Maya.


On Mar 8, 2011, at 2:22 PM, Joe Ardent wrote:
>> cameraScale seems very generically named, if it's scaling the focal length could/should it be focalScale?
>
> This is an excellent point, and I think we should take it.

If this property is meant to mimic the one in Maya, then that's the inverse of what it is. Setting cameraScale to 2 in Maya would be the equivalent of dividing the focal length by 2, not multiplying it. If anything it would be apertureScale. (I believe Francois got this right in his equations.)

On Mar 10, 2011, at 9:28 PM, Rob Bredow wrote:
>> So, for instance, I suggest removing "overscan". You get the same
>> result by changing the filmApertures.
>

> Yes, but would it be reasonable to agree to leave overscan as part of
> the actual camera definition for people who want to use it in their
> pipeline, but then provide a convenience function in Alembic that
> allows you to extract the "simplest possible representation" of the
> camera for use in a renderer or app that doesn't need the extra

> information? That's how I propose we treat the 2d parameters in
> Alembic--these parameters map neatly between Maya and Houdini (and
> perhaps other 3d apps as well), but you'd extract the simple camera
> for renderman.

One good reason for there to be a true overscan property different from "cameraScale" -- for apps or renderers that support it, it could be used to write exr files with proper image overscan (data windows larger than their display windows). I also agree with Andy that a true rendering overscan property should be separated into horizontal and vertical amounts.

However this property should not map to Maya's "overscan" attribute, which is used only in the viewport and not for batch rendering. If this Alembic property were meant to be used during rendering then it would not be correct for it to be stuffed into Maya's "overscan" attribute when importing an Alembic camera. I suppose Alembic could have something called "interactiveViewerOverscan" or something like that if it were really desired.

My $.02 and a half!

-Jonathan


Moritz Moeller

Mar 11, 2011, 10:06:45 AM
to alembic-discussion
On Mar 11, 5:28 am, Rob Bredow <rob.bre...@gmail.com> wrote:
> I held that opinion when we started this project as well...but I lost
> my conviction on it after a few discussions about it. One example: we
> often do 2d animation on top of our matchmoved 3d cameras. Of course,
> this can be baked into a single projection node (even a projection
> matrix for that matter), but it's very often useful to be able to know
> the 2d movement separate from the 3d matchmove further down the
> pipeline. That's how it's worked at Imageworks for years and I didn't
> see a reason to try eliminate that as long as the description was well
> understood and didn’t include too many different options (hence me
> arguing to kill filmFit).

+1 for a separate 3x3 matrix to describe a post-projection 2D xform
and a 2D box to describe crop.

I still find the RenderMan spec's camera description with the separate
projection, the possibility to put a 2D xform on the stack beforehand
and the crop window to be one of the most concise yet flexible ones.

The thing is that a lot of brains and money went into the RenderMan
spec. Why not just adapt the parts of this that apply to Alembic?


.mm

Peter Shinners

Mar 11, 2011, 11:32:34 AM
to alembic-d...@googlegroups.com
On 03/11/2011 07:06 AM, Moritz Moeller wrote:
> On Mar 11, 5:28 am, Rob Bredow<rob.bre...@gmail.com> wrote:
>> I held that opinion when we started this project as well...but I lost
>> my conviction on it after a few discussions about it. One example: we
>> often do 2d animation on top of our matchmoved 3d cameras. Of course,
>> this can be baked into a single projection node (even a projection
>> matrix for that matter), but it's very often useful to be able to know
>> the 2d movement separate from the 3d matchmove further down the
>> pipeline. That's how it's worked at Imageworks for years and I didn't
>> see a reason to try eliminate that as long as the description was well
>> understood and didn't include too many different options (hence me

>> arguing to kill filmFit).
> +1 for a separate 3x3 matrix to describe a post-projection 2D xform
> and a 2D box to describe crop.
>
> I still find the RenderMan spec's camera description with the separate
> projection, the possibility to put a 2D xform on the stack beforehand
> and the crop window to be one of the most concise yet flexible ones.
>
> The thing is that a lot of brains and money went into the RenderMan
> spec. Why not just adapt the parts of this that apply to Alembic?
>
>
> .mm
>


Adding to the Renderman appreciation thread;

I'm usually a fan of Fov instead of Aperture and Distance. This ends up
making the units decision a simple matter of radians vs degrees. Also
consider the day a spot light schema will be added, built around cone
angle. Building projection from both would be more consistent.
+0 for Fov.

It's important to me that the Alembic cameras transfer between
applications. I believe most do not support a 3x3 post transform. About
a year ago I did a survey of the camera definition used by several
applications. I'll try to dig that up.
-1 for post projection matrix.

Francois Chardavoine

Mar 11, 2011, 12:52:32 PM
to alembic-d...@googlegroups.com, Peter Shinners
Peter Shinners wrote, On 03/11/11 08:32:
> I'm usually a fan of Fov instead of Aperture and Distance. This ends up
> making the units decision a simple matter of radians vs degrees. Also
> consider the day a spot light schema will be added, built around cone
> angle. Building projection from both would be more consistent.
> +0 for Fov.
>
> It's important to me that the Alembic cameras transfer between
> applications. I believe most do not support a 3x3 post transform. About
> a year ago I did a survey of the camera definition used by several
> applications. I'll try to dig that up.
> -1 for post projection matrix.

A couple things we probably didn't make clear -
- no matter what the camera parameter set ends up being, the camera class would always provide the expected prman-style methods of:
   * getFOV()
   * getScreenWindow()
- assuming the Alembic camera isn't defined strictly with those, there would also be utility functions to convert fov+screenwindow values into a defining subset of the properties the abc camera would use, for the rare case when your source application (that you're exporting the camera from) or usage expresses the camera in those terms.

Additionally, even for renderman, you still want to preserve fStop, focalLength and focalDistance for cases when you want to render DOF in the renderer (as opposed to doing it in post).

Given that, what are the consequences of providing a more expressive set of properties on the camera? It wouldn't prevent you from doing anything, but it does enable additional information to be passed through, and optionally restored to applications which understand them. For applications that don't, the baked-down result of the parameter contribution would be used instead, with no loss of information.

I don't think I agree with the "lowest common denominator" approach: why lose information on 'write' as opposed to on 'read'? Something like a 2D post transform is information we want passed down anyway (for usage in Nuke for instance). If it's not part of the native camera properties, it simply means we'll be adding custom properties as meta-data and extracting it with custom pipeline tools (which wouldn't be exchangeable between studios), as opposed to having native vendor support for them whenever possible.

That being said, internally I know we can accommodate any solution.

Specifically with respect to the post-projection operations, we had initially thought of a couple methods of storing that, but eventually kept it out from the first phase of the discussion to see how much infighting there would be :) Here's the three possibilities we'd brought up (I'm mentioning them as FYI, not as any kind of endorsement!).

Option 1: provide a 3x3 2D affine matrix to express the post-projection operations (a small sketch of what such a matrix might carry follows the list)

Option 2: provide an arbitrary 2D transform stack to express the post-projection operations (translate/rotate/scale) - this allows for preserving the decomposition of what contributes to the motion (match-move, precomp, pan/tile, shake, etc), and the ability to specify pivots

Option 3: match Maya's post-projection attributes, i.e.:
- preScale double (default: 1.0): post-projection matrix pre-scale value
- horizontalFilmTranslate/verticalFilmTranslate double (defaults: 0.0, 0.0): The horizontal/vertical film translations for the normalized viewport.
- filmRollValue double (radians, default: 0.0): The amount of rotation to apply to the film back.
- filmRollOrder enum (default: rotate-translate): The order in which to apply the rotation with respect to the filmTranslate, rotate-translate or translate-rotate. rotate-translate implies that the rotation will occur before the translation, and translate-rotate implies that the translation will occur before the rotation.
- horizontalRollPivot/verticalRollPivot double (defaults: 0.0, 0.0): The 2D point on the projected image to rotate the film back around. These values are normalized to the viewing area of the camera.
- postScale double (default: 1.0): the post-projection matrix's post-scale value.
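
For Option 1, here's a minimal sketch of what such a matrix might carry: a 3x3
homogeneous 2D transform composed from a translate and a roll and applied to a
screen-space point. (Row/column-vector and pivot conventions are exactly the
kind of thing the spec would need to pin down; this assumes column vectors and
is illustrative only.)

#include <cmath>
#include <cstdio>

struct Mat3 { double m[3][3]; };  // m[row][col], points are column vectors (x, y, 1)

Mat3 identity()
{
    Mat3 r = {{ { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } }};
    return r;
}

Mat3 mul( const Mat3 &a, const Mat3 &b )
{
    Mat3 r = {};
    for ( int i = 0; i < 3; ++i )
        for ( int j = 0; j < 3; ++j )
            for ( int k = 0; k < 3; ++k )
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Mat3 translate( double tx, double ty )
{
    Mat3 r = identity();
    r.m[0][2] = tx;
    r.m[1][2] = ty;
    return r;
}

Mat3 rotate( double radians )
{
    Mat3 r = identity();
    r.m[0][0] = std::cos( radians );  r.m[0][1] = -std::sin( radians );
    r.m[1][0] = std::sin( radians );  r.m[1][1] =  std::cos( radians );
    return r;
}

int main()
{
    // A small 2D pan plus a half-degree roll applied after the 3D projection
    // (translate * rotate == rotate first, then translate, for column vectors).
    const double kPi = 3.14159265358979323846;
    Mat3 post = mul( translate( 0.02, -0.01 ), rotate( 0.5 * kPi / 180.0 ) );

    double x = 1.0, y = 2.0 / 3.0;  // a screen-window corner
    double px = post.m[0][0] * x + post.m[0][1] * y + post.m[0][2];
    double py = post.m[1][0] * x + post.m[1][1] * y + post.m[1][2];
    std::printf( "(%g, %g) -> (%g, %g)\n", x, y, px, py );
    return 0;
}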


Francois.




Andrew D Lyons

Mar 11, 2011, 7:26:00 PM
to alembic-d...@googlegroups.com, Francois Chardavoine, Peter Shinners
It's very difficult to argue that the Nuke/Houdini/Zeno native representations of the camera data should not also be supported once you start going down the road of supporting Maya's peculiarities. I think the distinction between an internal studio asset metadata system and a software/studio interchange system is a good one to keep. It's kind of the raison d'etre of Alembic - is it not? Option 2 gets my vote.

Cheers






--
=======================================
Andrew D Lyons | Digital Artist | http://www.tstex.com
=======================================

Erik.Strauss

Mar 25, 2011, 7:32:57 PM
to alembic-discussion
Hello All,

In case you had not seen it, there is a "gold candidate" Alembic
camera description based on the above feedback. It was posted in a new
thread on this forum.

http://groups.google.com/group/alembic-discussion/browse_thread/thread/187cd22b67ad4ccb

We intend to move quickly towards an implementation of this model so
if you have any additional comments please reply to the above thread
as soon as possible.

Thanks for your interest and support.

-Erik Strauss



On Mar 11, 5:26 pm, Andrew D Lyons <tstext...@gmail.com> wrote:
> It's very difficult to argue that the Nuke/Houdini/Zeno native
> representations of the camera data should not also be supported once
> you start going down the road of supporting Maya's peculiarities. I think
> the distinction between an internal studio asset metadata system and a
> software/studio interchange system is a good one to keep. It's kind of the
> raison d'etre of Alembic - is it not? Option 2 gets my vote.
>
> Cheers

katrin

Jan 22, 2014, 1:02:54 PM
to alembic-d...@googlegroups.com, jar...@ilm.com
hi,
just wondering if the proposed separate ortho cams are implemented
in the current Maya exporter already?
The resulting abc data looks pretty much the same to us.
Thanks,
katrin

Lucas Miller

Jan 22, 2014, 1:05:06 PM
to alembic-d...@googlegroups.com, Joe Ardent
The ortho cameras appear to have been accidentally overlooked and have not yet been implemented.

Lucas


