Pixar compositing?


Jonathan W Hendry

May 11, 1999, 3:00:00 AM
Anyone know any specifics about this PIXAR compositing
technology that's turning up in Quartz?

Mike P?

I'm wondering how it compares to what Lawson's always
talking about from GX. If we're lucky, maybe it's as
good or better, and we can all do victory laps (Lawson
included) and Lawson won't have to flog that particular
bit of dead horse anymore ;).


As long as Pixar stuff is coming over, what would be really
cool would be the ability to apply Renderman shaders to
the UI. It might even be fast if the rendering engine were
simplified to just render a shader on a flat surface, perhaps
with fixed or precalculated values for lighting, etc.

Leave the 3D apps to OpenGL, but shader-based UI themes would
be way cool. It would offer far better bragging rights than
merely tiling a fixed bitmap around.

Especially if events could modify the shader's parameters,
e.g. roughness varying with system load. Or if different
UI widgets could be given different shaders, and/or different
parameters.


Mike Paquette

May 11, 1999, 3:00:00 AM
Jonathan W Hendry wrote:
>
> Anyone know any specifics about this PIXAR compositing
> technology that's turning up in Quartz?

The compositing technology may already be familiar to some readers of
this newsgroup. It's the Porter-Duff compositing algebra developed at
PIXAR about 15 years ago (Hi, Tom!).

Porter-Duff compositing uses an alpha channel containing per-pixel
transparency information (an invention of Ed Catmull and Alvy Ray Smith)
to model various photographic image assembly mechanisms, such as one
might use in a film printer assembling a complex special effects scene.

One of the most commonly used compositing modes is
source-over-destination. In this mode, the alpha channel and compositing
logic are used to mimic the effect of a translucent holdout mask and
overprinting operation, permitting new scene elements to be placed atop
one another. This was demonstrated during Monday's WWDC keynote as
part of the 'PDF Playground' and 'Clouds' demos.
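For readers who want the arithmetic: on premultiplied-alpha pixels, source-over-destination reduces to one multiply-add per channel. A minimal sketch follows (illustrative only, not Quartz's actual implementation; function and variable names are made up for the example):

```python
# Porter-Duff "source over destination" on premultiplied RGBA pixels,
# with channel values in the range 0.0-1.0.

def over(src, dst):
    """Composite a premultiplied src pixel over a dst pixel."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    # result = src + (1 - src_alpha) * dst, applied to every channel
    k = 1.0 - sa
    return (sr + k * dr, sg + k * dg, sb + k * db, sa + k * da)

# A 50%-opaque red element placed over an opaque blue background:
print(over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
# (0.5, 0.0, 0.5, 1.0)
```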

>
> I'm wondering how it compares to what Lawson's always
> talking about from GX. If we're lucky, maybe it's as
> good or better, and we can all do victory laps (Lawson
> included) and Lawson won't have to flog that particular
> bit of dead horse anymore ;).

Lawson English won't be happy, as what we are doing doesn't support the
over 4 billion possible bit-twiddling operations that GX can do. <Cue
Lawson; everyone else can kill the thread now>

The good news is that what we do have in Quartz is very fast, and is a
good match to what the image manipulation and prepress crowd have
indicated they want.

I cannot comment on the rest of the article at this time.

Mike Paquette


Lawson English

May 11, 1999, 3:00:00 AM
Mike Paquette <mpa...@ncal.verio.com> said:

>Lawson English won't be happy, as what we are doing doesn't support the
>over 4 billion possible bit-twiddling operations that GX can do. <Cue
>Lawson; everyone else can kill the thread now>
>
>The good news is that what we do have in Quartz is very fast, and is a
>good match to what the image manipulation and prepress crowd have
>indicated they want.
>
>I cannot comment on the rest of the article at this time.

Sad. Why?

Because that means that you can't directly support QuickTime color
compositing within an arbitrary Quartz-based application.

Why is it that ONLY Mac developers have seen this as a useful advantage
(see Semper Fi comments from myriad developers *in favor of* this idea,
which was developed independently by several developers besides myself)?


BTW, GX color compositing supports Porter-Duff on a per-color-channel
basis. Why would this be a BAD thing to support?

----------------------------------------------------------------------
Want Apple to license Cyberdog for third-party development? Go to:
<http://www.pcsincnet.com/petition.html>
----------------------------------------------------------------------


Sean Luke

May 11, 1999, 3:00:00 AM
Lawson English <eng...@primenet.com> wrote:

> BTW, GX color compositing supports Porter-Duff on a per-color-channel
> basis. Why would this be a BAD thing to support?

Because it's slow?

Sean

Eric King

May 11, 1999, 3:00:00 AM
In article <7ha8rb$qha$1...@cronkite.cs.umd.edu>, Sean Luke
<se...@drinkme.cs.umd.edu> wrote:

:Because it's slow?

Not necessarily. By checking the settings in the ink objects of the
source and destination shapes it's possible to see whether a transfer mode
is going to be simple or complex, before any compositing is actually done.
If it's a simple Porter & Duff style composite, then the extra checks for
color-clamping as well as the matrix multiply can be avoided.
I don't know for sure whether GX does these checks, but I'd be very
surprised if it didn't, since it already checks that the source and
destination are in the same color-space and have the same color-matching
info. For simple alpha-channel composites, I've never noticed much of a
speed difference between GX and Classic Quickdraw.
Also remember, Quicktime uses GX transfer modes in its vector renderer,
and it seems to be able to draw them at a pretty good clip. (Check out
Electrifier and LiveStage.) If an efficient implementation of GX transfer
modes wasn't possible, the Quicktime team never would have used them.
More info at:
http://developer.apple.com/techpubs/quicktime/qtdevdocs/REF/refVectors.f.htm#39371


::Eric

Lawson English

May 11, 1999, 3:00:00 AM
Sean Luke <se...@drinkme.cs.umd.edu> said:


Er, how slow? With today's CPUs, one can load an entire pixel into a
register, manipulate it, and save it within the latency period of a memory
access.

Performing GX manipulations could be done using AltiVec without appreciably
slowing down the graphics system.

As I pointed out, GX manipulations are used in QT vectors and many QT
effects. Are you suggesting that typing a letter requires greater
CPU/system power than doing a transition effect on a full-screen QT MooV?

WillAdams

May 12, 1999, 3:00:00 AM
Lawson said:
>>> BTW, GX color compositing supports Porter-Duff on a per-color-channel
>>> basis. Why would this be a BAD thing to support?

to which Sean retorted:
>>Because it's slow?

<SNIP>

It would be a bad thing to support per-color compositing because
color-matching would then be handled on too fine-grained a basis to allow
for overall color management in the system as a whole.

My company spends $100,000 each year providing our customers with test plates
to fingerprint printing presses and we can ill-afford to have this process
muddied by applications blindly compositing colors in an un-controlled manner.

William

William Adams
http://members.aol.com/willadams
Sphinx of black quartz, judge my vow.


Eric King

May 12, 1999, 3:00:00 AM
In article <19990512074640...@ng06.aol.com>, will...@aol.com
(WillAdams) wrote:

:It would be a bad thing to support per color compositing because color-matching
:would then be handled on too fine a grained basis to allow for overall color
:management in the system as a whole.

I don't think you have a very clear idea as to what is going on. In
Porter & Duff compositing, i.e. the kind that's going to be in Mac OS X
and was in DPS, color channel 1 of the source is composited with channel 1
of the destination, channel 2 of the source with channel 2 of the
destination, etc.
GX extends this model by allowing a given color channel of the source
to affect arbitrary channels in the destination. Each channel could also
have its own private compositing mode. (This is what Lawson meant by
per-color compositing...) It also integrated color-space conversion,
color-matching, and color-clamping. Again, take a look at the following
to see what is going on:
http://developer.apple.com/techpubs/quicktime/qtdevdocs/REF/refVectors.f.htm#39371
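A rough sketch of the distinction described above, with made-up names (this is not the GX API): standard Porter & Duff pairs channel i of the source with channel i of the destination, while the GX-style model lets each destination channel draw on arbitrary source channels, each with its own private mode.

```python
# Standard model: channel 1 with channel 1, channel 2 with channel 2, ...
def composite_standard(src, dst, mode):
    return tuple(mode(s, d) for s, d in zip(src, dst))

# GX-style model (sketch): mixers[i] is (weights over the source
# channels, compositing mode) for destination channel i.
def composite_per_channel(src, dst, mixers):
    out = []
    for (weights, mode), d in zip(mixers, dst):
        s = sum(w * c for w, c in zip(weights, src))
        out.append(mode(s, d))
    return tuple(out)

add = lambda s, d: min(1.0, s + d)   # clamped additive mode
mul = lambda s, d: s * d             # multiplicative mode

src, dst = (0.8, 0.2, 0.0), (0.1, 0.5, 0.9)
# Route the source red channel into the destination blue channel with a
# multiply, while the other two channels pass the destination through:
mixers = [((0, 0, 0), add), ((0, 0, 0), add), ((1, 0, 0), mul)]
print(composite_per_channel(src, dst, mixers))
# ~ (0.1, 0.5, 0.72)
```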

Presumably Mac OS X is going to add color-space conversion and
color-matching to the classic Porter & Duff compositing operations, giving
the whole system functionality that you already had in apps like
Photoshop.
The current point of contention is that Quicktime's vector layer still
uses GX's transfer modes.
(http://developer.apple.com/techpubs/quicktime/qtdevdocs/REF/refVectors.f.htm#39371)
For consistency's sake, one of three things needs to happen:

1. Quicktime's transfer modes need to be 'dumbed down' to the Quartz model.
2. Quartz needs to be upgraded to be able to support the model used by
QuickTime.
3. A new system needs to be put in both that is a superset.

IMO, none of these is likely to happen; I strongly suspect that Apple
will opt to just leave things as is. Considering that they just gave the
Yellow Box the name Cocoa, which is also the name of an educational
scripting environment that Apple released not too long ago, I don't think
the company is terribly worried about maintaining consistency across its
technologies. In addition, since Apple is now shipping Final Cut, it's
probably not in their best interests to integrate whizzy system-wide
compositing functionality.
Fortunately, this issue isn't a showstopper for Mac OS X, but this
dichotomy will probably bug some developers who want to tightly integrate
Quartz and Quicktime imaging.

:My company spends $100,000 each year providing our customers with test plates
:to fingerprint printing presses and we can ill-afford to have this process
:muddied by applications blindly compositing colors in an un-controlled manner.

It's not uncontrolled. These are *exactly* the same sorts of ops that
Photoshop and After Effects do. What do you think is going on when you use
Photoshop's Layers?

::Eric

Lawson English

May 12, 1999, 3:00:00 AM
Eric King <r...@smallandmighty.com> said:

>
>:My company spends $100,000 each year providing our customers with test
>:plates to fingerprint printing presses and we can ill-afford to have this
>:process muddied by applications blindly compositing colors in an
>:un-controlled manner.
>
> It's not uncontrolled. These are *exactly* the same sorts of ops that
>Photoshop and After Effects do. What do you think is going on when you use
>Photoshop's Layers?

Apparently the original author is unaware that ColorSync was first bundled
with GX and later released as a separate product. ColorSync was *designed*
to handle per-color-channel compositing and keep things coordinated between
devices because it was designed with GX in mind. I doubt if any of the
multi-channel-filter applications such as Photoshop could make use of
ColorSync info if it hadn't been designed that way.

Jonathan W Hendry

May 12, 1999, 3:00:00 AM
Quoth WillAdams on 13 May 1999 00:22:40 GMT in <19990512202240...@ng-fi1.aol.com>:
>I was a bit rushed when I made my first post. To restate what I was trying to
>say, along with the underlying logic, please follow along.
>Quartz (Apple's new imaging model) has the ability to composite complete images
>while QuickDraw/GX allows one to composite individual color channels; I feel
>that the latter is problematic and unnecessary.

Uh, your logic is a little off here.

> If one needs to composite only a single color channel of an image, it's a
> problematic UI issue as to how the user would direct such a thing (I can't
> think of an elegant/consistent way to handle this in a layout application).

Think of Photoshop's channels. Same idea, but finer-grained.

>Moreover, if one has the ability to color separate an image, this is
>unnecessary since one can separate out the color information which one wants
>(color-correcting at this time if the user desires) and then composite that
>(as a complete image) with the other complete image (repeat afore-mentioned
>step if one wishes to blend two separate color channels of two separate images).

That's not very fine grained. The per-channel compositing would allow
*portions* of images to be composited this way: floating selections,
paragraphs of text, etc.

>Those who remember the momentary nightmare of Photoshop 5's automated
>imposition of color correction on images will appreciate the nicety of
>having this under user volition.

But that's a different issue. Presumably, color separations for
a page would be based on the image that results from the fancy
per-channel compositing.


>I'll admit I'd be interested in seeing prototypes of tools which do
>compositing such as Lawson describes for GX--but I think the UI for
>such a tool would be redundant in terms of specifying color information
>and correction

I think this is the problem: you're thinking of it as a large-scale
color-correction tool.

Have you ever used Metacreations' Painter? You can make a selection
in one image, copy it, and paste it in another document where it
becomes a 'floater'. You can adjust the compositing method for
the floater, which changes the appearance depending on the
interaction of the upper and lower images. Presumably, this
uses a direct mapping of channels, R-R, G-G, B-B.

The GX-style compositing would just require a UI for choosing
which channels to use. A little 3-column matrix of checkboxes.
Not an interface stretch.
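That checkbox matrix could map directly onto a boolean routing table; here's a hypothetical sketch (names and the averaging rule are my own for illustration, not Painter's or GX's behavior):

```python
# route[d][s] is True when source channel s feeds destination channel d,
# mirroring a 3x3 grid of checkboxes in a compositing dialog.

def route_channels(src, dst, route):
    out = []
    for d_idx, d_val in enumerate(dst):
        picked = [src[s] for s in range(len(src)) if route[d_idx][s]]
        # If any boxes are checked, average the selected source channels
        # into this destination channel; otherwise leave it untouched.
        out.append(sum(picked) / len(picked) if picked else d_val)
    return tuple(out)

# Identity routing: R->R, G->G, B->B (the Painter-style default)
identity = [[True, False, False], [False, True, False], [False, False, True]]
print(route_channels((0.2, 0.4, 0.6), (0.9, 0.9, 0.9), identity))
# (0.2, 0.4, 0.6)
```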

Lawson English

May 12, 1999, 3:00:00 AM
Jonathan W Hendry <jhe...@shrike.depaul.edu> said:

>The GX-style compositing would just require a UI for choosing
>which channels to use. A little 3-column matrix of checkboxes.
>Not an interface stretch.

GX goes a few steps farther if you really want it to.

In addition to providing per-color-channel compositing modes, there are
three 5x4 matrices that can specify how much of each channel is to be used
in each stage for each color channel.
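One plausible reading of such a matrix (a sketch only; GX's actual matrix layout and stage semantics may differ): each of the four output channels is a weighted sum of the four input channels plus a constant term, giving five columns per row.

```python
# Apply a 4-row, 5-column color matrix to an RGBA pixel: each output
# channel mixes all four input channels plus a constant offset.

def apply_color_matrix(pixel, m):
    """pixel: (r, g, b, a); m: 4 rows of 5 weights [r, g, b, a, const]."""
    r, g, b, a = pixel
    inputs = (r, g, b, a, 1.0)
    return tuple(sum(w * v for w, v in zip(row, inputs)) for row in m)

# Swap the red and blue channels, leaving green and alpha untouched:
swap_rb = [
    [0, 0, 1, 0, 0],  # out R = in B
    [0, 1, 0, 0, 0],  # out G = in G
    [1, 0, 0, 0, 0],  # out B = in R
    [0, 0, 0, 1, 0],  # out A = in A
]
print(apply_color_matrix((1.0, 0.5, 0.0, 1.0), swap_rb))
# (0.0, 0.5, 1.0, 1.0)
```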

You can also do things like invert the role of the source and destination
pixels, and so on. As soon as my finals are over, I'll tidy up the demo
stack and put GXFCN 0.01a up at <http://www.eondesign.com> and you can play
around with designing your own interface to the Ink object [my first pass
at an interface is ludicrously complex...].

One nice thing with GX is that you can use an ink object as though it were
a filter and share it amongst graphics and text objects, as well as
bitmaps.


BTW, the only reason *I* can think of why GX's solution would be so slow
in Quartz is that there's no low-level support for it in the graphics
engine. I don't see that as a problem in the *long* run, since Apple could
simply add GX-like support and provide an alternate API to the engine that
would allow one to create Ink-object-like solutions.

The ideal thing would be to provide low-level hooks into the rendering
pipeline so that ANY kind of customization could be done. The default
would be the QT-compatible Ink objects, but you could override/replace
that part of the rendering engine with your own modifications if you liked.

With the advent of AltiVec in G4 Macs, it would be foolish NOT to provide
these hooks, since most GX-style modifications could probably be completed
within the latency period of drawing to the screen. IOW, at least with a
G4 Mac, speed simply wouldn't be an issue.

WillAdams

May 13, 1999, 3:00:00 AM
I was a bit rushed when I made my first post. To restate what I was trying to
say, along with the underlying logic, please follow along.

Quartz (Apple's new imaging model) has the ability to composite complete
images while QuickDraw/GX allows one to composite individual color
channels; I feel that the latter is problematic and unnecessary.

If one needs to composite only a single color channel of an image, it's a
problematic UI issue as to how the user would direct such a thing (I can't
think of an elegant/consistent way to handle this in a layout application).

Moreover, if one has the ability to color separate an image, this is
unnecessary since one can separate out the color information which one wants
(color-correcting at this time if the user desires) and then composite that (as
a complete image) with the other complete image (repeat afore-mentioned step if
one wishes to blend two separate color channels of two separate images).

Those who remember the momentary nightmare of Photoshop 5's automated
imposition of color correction on images will appreciate the nicety of
having this under user volition.

I believe that it's more important to have an extant tool which works with
current standards and work practices than an item which is on eternal
back-order and unavailable for use in productive work.

I'll admit I'd be interested in seeing prototypes of tools which do
compositing such as Lawson describes for GX--but I think the UI for such a
tool would be redundant in terms of specifying color information and
correction in comparison to the system which I've described above, which
would use in-place system calls which could be optimized more thoroughly,
I think, than a separate pair of calls.
