> Just wondering your thoughts on premultiplication and how it fits into a color managed pipeline. How does OCIO deal with premultiplication?
First off, I'm 'pro' premultiplication. To the best of my
understanding, premultiplication is the natural representation of
pixels with alpha; the big confusing thing about it is the name.
I feel the simplest way to understand premultiplication is to look at
it from a rendering perspective. If you're writing a renderer, you ask
yourself "how much energy is being emitted from within the bounds of
this pixel?" Call that rgb. "And how much of the background does
this pixel occlude?" That's alpha. This is how all renderers work
(prman, arnold, etc.), and it's called 'premultiplied'. Note that at no
time does prman have an explicit step that multiplies rgb by alpha;
it's implicit in our definition of 'total pixel energy'. And it's
the right way to do things. A nice article on this topic is Alvy Ray
Smith's 1995 Tech Memo 4, 'Image Compositing Fundamentals'.
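To make that "total pixel energy" intuition concrete, here's a minimal sketch (my own illustration, not from the original discussion) of the premultiplied 'over' operator: one multiply-add per channel, no divides, no special cases.

```python
def over(fg, bg):
    """Composite premultiplied RGBA fg over premultiplied RGBA bg.

    rgb is the energy emitted within the pixel's bounds; alpha is how
    much of the background the pixel occludes.
    """
    r1, g1, b1, a1 = fg
    r2, g2, b2, a2 = bg
    k = 1.0 - a1  # fraction of the background that shows through
    return (r1 + k * r2, g1 + k * g2, b1 + k * b2, a1 + k * a2)

# A 50% opaque mid-grey (premultiplied) over opaque white:
print(over((0.25, 0.25, 0.25, 0.5), (1.0, 1.0, 1.0, 1.0)))
# -> (0.75, 0.75, 0.75, 1.0)
```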
This definition also leads to a nice intuition for what would
otherwise be corner cases. Consider a pixel where rgb > alpha, such as
2.0, 2.0, 2.0, 1.0. Nothing special about this - it just represents a
'specular' pixel where it's emitting 2.0 units of light, and is fully
opaque. A pixel value of (2.0, 2.0, 2.0, 0.0)? Nothing special
about this either; it represents a pixel that's contributing 2.0
units of light energy, and happens not to occlude objects behind it.
Both of these cases can cause trouble with unpremultiplied
representations (for one, there's no unpremultiplied value to recover
when alpha is 0.0).
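A quick sketch (again, my own illustration) of why the zero-alpha case bites if you insist on an unpremultiplied representation:

```python
def unpremult(pixel):
    """Convert a premultiplied RGBA pixel to unassociated (straight) alpha."""
    r, g, b, a = pixel
    if a == 0.0:
        # The pixel carries rgb energy but occludes nothing; there is
        # no unpremultiplied value to recover by dividing.
        raise ValueError("cannot unpremultiply a pixel with zero alpha")
    return (r / a, g / a, b / a, a)

# The 'specular' pixel is fine - it's just an HDR value:
print(unpremult((2.0, 2.0, 2.0, 1.0)))  # -> (2.0, 2.0, 2.0, 1.0)

# The emissive zero-alpha pixel has no unpremultiplied equivalent:
# unpremult((2.0, 2.0, 2.0, 0.0)) raises ValueError
```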
OCIO currently doesn't make any distinction between the two states; it
leaves that to the user. You can hand OCIO either premult or unpremult
pixels, it just applies the color math to rgb. What do I recommend?
There is no standard, and both ways can be useful in practice. This is
why nuke, for example, gives you a checkbox for which order to apply
the lut / premultiplication.
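To see why that checkbox exists, here's a sketch showing the two orders genuinely disagree on semi-transparent pixels. A simple gamma stands in for whatever color transform you'd actually apply (in practice OCIO would supply it; OCIO only ever sees the rgb triple either way):

```python
def transform(rgb, gamma=2.2):
    # Stand-in for a non-linear color transform; it neither knows nor
    # cares whether the rgb it receives is premultiplied.
    return tuple(c ** (1.0 / gamma) for c in rgb)

def on_premult(pixel):
    """Apply the transform directly to premultiplied rgb."""
    r, g, b, a = pixel
    return transform((r, g, b)) + (a,)

def on_unpremult(pixel):
    """Unpremult, transform, then re-premult."""
    r, g, b, a = pixel
    tr, tg, tb = transform((r / a, g / a, b / a))
    return (tr * a, tg * a, tb * a, a)

px = (0.25, 0.25, 0.25, 0.5)  # semi-transparent mid-grey
print(on_premult(px))
print(on_unpremult(px))  # different rgb - the order matters
```

For fully opaque pixels the two orders agree; the discrepancy lives entirely in the semi-transparent regions.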
But my perspective, coming from Sony, is to avoid the issue to
the greatest extent possible. If you want to do a non-linear color
encoding, and also transmit alpha information you're pretty much hosed
anyways. There is just no good way to send log data, with alpha, in
such a way that the log comp will produce a result equivalent to the
linear comp. (I've actually done experiments where you munge the
output alphas to minimize alpha errors, knowing what the destination
(bg layer) colors will be, and trust me - that's not a battle I ever
want to fight again.)
In our pipeline, we try to only keep alphas around when dealing with
imagery in the original rendered (linear) colorspace. If you want a
log representation, or any device output, we drop the alpha.
For the few outputs where we need to preserve alpha, we fake the best
job we can and just accept the limitations of an LDR comp. The most
common example of this process is marketing deliveries, where
they want a display referred tif / png to do a 'desktop publishing'
comp (i.e., photoshop). For these, we do linear exr -> unpremult ->
color conversion from linear to display referred -> store raw
unpremult data in supported 'unassociated' format (tif/png). This
process will allow for nice comps on elements such as hair edges, but
would not be sufficient for large semi-transparent elements such as fx
renders. (OCIO is handed unpremult data).
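That delivery path can be sketched like this (numpy; a made-up gamma stands in for the real linear-to-display conversion, which would actually come from the color pipeline, and the zero-alpha dodge is my own simplification):

```python
import numpy as np

def linear_to_display(rgb):
    # Hypothetical stand-in for the real linear -> display-referred
    # transform.
    return np.clip(rgb, 0.0, 1.0) ** (1.0 / 2.2)

def deliver_unassociated(rgba):
    """Premultiplied linear float image (..., 4) -> unassociated display image.

    Mirrors: linear exr -> unpremult -> display transform -> store raw
    unpremultiplied data in an 'unassociated' format (tif/png).
    """
    rgb, a = rgba[..., :3], rgba[..., 3:]
    safe_a = np.where(a > 0.0, a, 1.0)  # dodge divide-by-zero at a == 0
    display_rgb = linear_to_display(rgb / safe_a)
    return np.concatenate([display_rgb, a], axis=-1)

img = np.array([[[0.25, 0.25, 0.25, 0.5]]])  # one semi-transparent pixel
print(deliver_unassociated(img))
```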
For implementing image displays, we do not do any special treatment
for alphas. For example, in Katana's image display we take the
premultiplied HDR linear rendered images, and apply the output display
transform to the premultiplied imagery. (OCIO is handed premult data).
This is conceptually equivalent to showing the image comp'd in
linear over black, and in my experience is typically what artists
expect in a compositing / lighting package.
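A tiny check (my illustration) of that "comp'd over black" equivalence: compositing a premultiplied pixel over opaque black leaves its rgb untouched, so transforming the premultiplied rgb directly is the same as transforming the over-black comp.

```python
def over(fg, bg):
    # Premultiplied 'over': out = fg + (1 - fg_alpha) * bg
    r1, g1, b1, a1 = fg
    r2, g2, b2, a2 = bg
    k = 1.0 - a1
    return (r1 + k * r2, g1 + k * g2, b1 + k * b2, a1 + k * a2)

black = (0.0, 0.0, 0.0, 1.0)
px = (0.25, 0.25, 0.25, 0.5)  # semi-transparent premultiplied pixel
print(over(px, black)[:3] == px[:3])  # -> True
```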
(Hopefully this doesn't start a flame war.) ;)