
Real-Time RenderMan?


Steve Hollasch

May 12, 2000

Anybody doing it? Is it plausible? Given that real-time raytracing is
within reach, what would it take to do real-time RenderMan? How would
raytracing compare against Reyes as the underlying algorithm?

With regards to a real-time raytracing API, the biggest weakness there
is that you'd pretty much be forced to implement a scene graph, while most
of the world vastly prefers an immediate-mode interface for realtime
libraries. In contrast, the Reyes algorithm seems ideally suited to
streamed commands to iteratively build up an image.

Further, what would it take to layer a RenderMan compliant interface
atop either a raytracing or a Reyes renderer in a realtime fashion? I'm
unsure of the additional constraints of RenderMan that may steer it away
from realtime. For example, RenderMan is pretty much script-driven, or at
least not compatible with a C++ program (for example.) Am I mistaken? Has
anybody considered these questions?

(Note: I'm not arguing the quality of RenderMan versus the quality of
raytracing. Indeed, a realtime system may use only a fraction of the power
of either approach. Rather, I'm interested in the plausibility of using
either as a realtime rendering library.)


Alex Magidow

May 12, 2000

Skint wrote:

> ... RenserMan the rendering specification -...

The renserman specs, eh? No wonder he's confused....sorry, it was a bit
tempting.


--
You Know You've Been Raytracing Too Long When:
You have ever said "I don't need no stinking modelers!!!"
Produced by Alex Magidow's Sig File Randomizer (YKYBRTTL jokes courtesy
of c.g.r.raytracing)

Skint

May 13, 2000
On Fri, 12 May 2000 15:15:30 -0700, "Steve Hollasch"
<stev...@microsoft.com> wrote:

>
> Anybody doing it? Is it plausible? Given that real-time raytracing is
>within reach, what would it take to do real-time RenderMan? How would
>raytracing compare against Reyes as the underlying algorithm?
>
> With regards to a real-time raytracing API, the biggest weakness there
>is that you'd pretty much be forced to implement a scene graph, while most
>of the world vastly prefers an immediate-mode interface for realtime
>libraries. In contrast, the Reyes algorithm seems ideally suited to
>streamed commands to iteratively build up an image.
>
> Further, what would it take to layer a RenderMan compliant interface
>atop either a raytracing or a Reyes renderer in a realtime fashion? I'm
>unsure of the additional constraints of RenderMan that may steer it away
>from realtime. For example, RenderMan is pretty much script-driven, or at
>least not compatible with a C++ program (for example.) Am I mistaken? Has
>anybody considered these questions?
>

It looks like you're confusing Pixar's PhotoRealistic RenderMan (the
rendering application) with RenserMan the rendering specification -
which does have a C implementation. There are RenderMan compliant
renderers that use raytracing (BMRT) as well as REYES (PRMan) or other
scanline techniques (RenderDotC, Aqsis, Siren, etc.).

There was a post over on comp.graphics.rendering.raytracing about
real-time raytracing - I've seen several examples around as Java
applets, albeit with very limited scenes - it probably won't be
that long coming.

Although real-time ray tracing isn't really that necessary - it gives
you real reflections and refractions but not a lot more - the per-pixel
shading in real time interests me a lot more.

Simon

Larry Gritz

May 13, 2000
In article <391c...@news.microsoft.com>,
Steve Hollasch <stev...@microsoft.com> wrote:
>
> Anybody doing it?

rgl? Or did you mean full implementation of SL?


>Is it plausible?

For trivial scenes, I suppose.

> Given that real-time raytracing is
>within reach

Is this a joke?

> what would it take to do real-time RenderMan? How would
>raytracing compare against Reyes as the underlying algorithm?

Ray tracing would have horrible coherence properties and would be much
slower and take more memory. For perhaps 1% of interesting scenes,
though, it would have noticeably fewer artifacts.


> With regards to a real-time raytracing API, the biggest weakness there
>is that you'd pretty much be forced to implement a scene graph, while most
>of the world vastly prefers an immediate-mode interface for realtime
>libraries. In contrast, the Reyes algorithm seems ideally suited to
>streamed commands to iteratively build up an image.

How so? The Reyes algorithm still builds up the scene graph and only
starts rendering once the entire scene is in the database.


> Further, what would it take to layer a RenderMan compliant interface
>atop either a raytracing or a Reyes renderer in a realtime fashion?

I'm not sure it would take anything in particular, except for the
rendering algorithm to be sufficiently fast. I suppose adding some
functionality for display lists would be good, just so you could
easily communicate what changes and what stays static in a scene.

Of course, the real problem is that, especially with any global
effects, an object that doesn't move can still need to be reshaded on
every frame. And since shading is by far the dominant part of
rendering, it follows that it would be extremely difficult to avoid
redrawing the entirety of the scene for every frame, even if there
were a convenient way to express it.

> For example, RenderMan is pretty much script-driven, or at
>least not compatible with a C++ program (for example.) Am I mistaken? Has
>anybody considered these questions?

Huh? The primary interface is a C language API, much like OpenGL.
RIB is just a metafile that records the sequence of C function calls.
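
In fact a minimal program against ri.h looks something like this (a
sketch using only standard RI entry points; any compliant renderer's
client library should accept it):

    #include <ri.h>

    int main(void)
    {
        RiBegin(RI_NULL);                 /* start the renderer        */
        RiDisplay("out.tiff", RI_FILE, RI_RGB, RI_NULL);
        RiProjection(RI_PERSPECTIVE, RI_NULL);
        RiWorldBegin();
          RiTranslate(0.0, 0.0, 5.0);     /* push the sphere into view */
          RiSurface("plastic", RI_NULL);  /* a standard shader         */
          RiSphere(1.0, -1.0, 1.0, 360.0, RI_NULL);
        RiWorldEnd();
        RiEnd();
        return 0;
    }

With some client libraries, passing a filename to RiBegin() writes the
equivalent RIB text instead of rendering pixels.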

-- lg

--
Larry Gritz Pixar Animation Studios
l...@pixar.com Richmond, CA

Skint

May 13, 2000

Sorry, it was 3 in the morning :o)

Should have checked the rendering methods though - oops


Bjorke

May 14, 2000

> How so? The Reyes algorithm still builds up the scene graph and only
> starts rendering once the entire scene is in the database.

...with the exception of RiProcedural?

---

I've done a bit of (idle!) thinking about which aspects of The Spec are
architecturally restrictive -- a rendering engine might break the spec
but still do cool things. In a realtime environment, the popular
approach seems to be to hide/rasterize early, shift the shading samples
to align with the raster and then use the display hardware for shading
calculations, rather than dicing the sample according to its 3D
geometry, shading, hiding, and then rasterizing. Does this really BREAK
the spec? ALL of the spec?

Obviously, that sort of pipeline design presents a host of problems for
the implementation of such a render engine (most of them revolving
around antialiasing and texturing quality). Would these problems and
limits likewise be common to *all* such implementations? Could one
isolate a specific dialect of the RenderMan spec that was appropriate
for generalized raster-oriented hardware acceleration? Say, a spec that
maps (most of) RM over OpenGL in a predictable way that could be shared
between manufacturers?

This would be a bit different from EXTENDING the spec (such as adding
new gprims or SL data types), but there's already been precedent for
changing the spec (e.g., the first rev associated ST parameters with the
poly vertex list, rather than the poly topology), and no renderer has
ever really covered all of the spec.

--
kb
LT Supv
Final Fantasy
http://www.finalfantasy.com/



Ronald Praver

May 14, 2000
I wanted to add my two cents to this thread.

I am working on two parallel cores right now.
One is a REYES core the other which seems to be
very promising is based on a game engine core.

To explain: most game engines today can do from 15 to X frames a
second on non-video-accelerated machines at resolutions of 320x200 and
up. With that in mind, if you look at a game engine core it has many
concepts similar to REYES:

1) Zbuffer
2) Rasterizer
etc.

First, the Z-buffer must be a multi-depth Z-buffer. In other words,
when the rasterizer finds a z depth closer than one already there,
instead of overwriting it you insert the new value in front of the
current one; if the new z depth is greater, you insert it in the list
at its corresponding position. The Z-buffer is therefore nothing more
than a two-dimensional array of NULL pointers when we start a frame,
and each point grows into a linked list of z values plus
pixel/fleck/micropolygon info. This multi-level Z-buffer facilitates
transparency.
Caveat:
The biggest drawback to realtime in the above Z-buffer is the speed
at which one can create and then destroy/clear it when rendering
multiple frames one after the other.
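
In outline, one possible shape for that structure (a sketch; all the
names here are mine, not from any particular renderer):

    #define XRES 320
    #define YRES 200

    /* One sample ("fleck") in a pixel's depth-sorted list. */
    typedef struct Fleck {
        float         z;          /* depth of this sample              */
        float         r, g, b, a; /* color + opacity, for compositing  */
        struct Fleck *next;       /* next sample, farther from the eye */
    } Fleck;

    /* The Z-buffer itself: a 2D array of NULL pointers at frame start. */
    static Fleck *zbuffer[YRES][XRES];

    /* Insert a sample at (x, y), keeping the list sorted near-to-far. */
    static void insert_fleck(int x, int y, Fleck *f)
    {
        Fleck **p = &zbuffer[y][x];
        while (*p && (*p)->z < f->z)   /* walk past anything closer */
            p = &(*p)->next;
        f->next = *p;                  /* splice in at this depth   */
        *p = f;
    }

Walking and freeing XRES x YRES of these lists every frame is exactly
the clearing cost described in the caveat.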

Now, where REYES supports parametric surfaces etc., a game engine is
usually optimized for triangles and quads. This might at first seem
limiting, but if the rasterizer supports implicit surfaces and
interpolates not only the z depth but normals etc., one can produce a
reasonably meshed object at the pixel level with the same info as a
REYES-defined micropolygon!

After the above rasterization, the steps that follow in the rendering
pipeline are the same as in REYES.

Each Zbuffer[x][y] that is still NULL is defined as background;
otherwise, the current shader is applied to each fleck in question.

An earlier problem that I was having with the above is supporting
Deriv(), Du(), and Dv(), but I think I have a solution for this.

Also, the current version of the renderer supports shaders directly
linked with the executable, dynamically linked shaders, and a byte
code version.

There were several reasons for this decision: directly linked and
dynamically linked shaders run at machine speed, but require a
compatible compiler to produce the dynamically linked shaders. The
byte code will always run slower, but is portable to any platform.

To leave all routes open, the release I am planning ships with source,
so any of them may be chosen. The source is generic enough so as not
to cause compiler barfing.

Alex Magidow

May 14, 2000

Larry Gritz wrote:

> > Given that real-time raytracing is
> > within reach
>
> Is this a joke?

Hey! Real-time raytracing is a definite possibility - if the geometry
is hard-coded, there are only a few primitives and light sources, and
few to no shiny/reflective surfaces... oh, wait... that can already be
done with similar quality by current game engines (i.e., Quake III,
Unreal, many GeForce-using games) without raytracing.


Andrew Bromage

May 15, 2000
G'day all.

"Steve Hollasch" <stev...@microsoft.com> writes:

> Anybody doing it? Is it plausible?

One of the original visions for the RenderMan API was as a real-time 3D
API. It has now been all but superseded by SGI's GL, but if you can
get your hands on a NeXT cube, you can still find Quick RenderMan(tm).
It lives on in projects like the GNU 3DKit for GNUstep. Check out their
web site to see what they're up to:

http://www.gnustep.org/DeveloperSuite/G3DKit/G3DKit.html

Cheers,
Andrew Bromage

Stephen H. Westin

May 15, 2000
bro...@goaway.cc.monash.edu.au (Andrew Bromage) writes:

Well, qrman never implemented the shading language, which I think was
implicitly included in Steve's question.

You are, I think, right in thinking that real-time RenderMan was in
the original vision; the "Afterword" in ARM makes it clear that the
hardware project was in fact the horse pulling the RenderMan
cart. This is consistent with a phone conversation I had with Ed
Catmull back around 1984, I think, when (then) Lucasfilm was planning
a 3D rendering machine as a follow-on to the Pixar Image
Computer. Pixar bailed out of the hardware business, and the REYES
machine was stillborn.

That said, real-time RenderMan would be a challenge, notably because
you can't really guarantee or predict frame time *and* give arbitrary
programmable functionality to the user. There are probably RenderMan
animations that could be done in real time on properly designed
hardware, but as Tom Duff is so eager to point out, there are frames
that take hours to render, and we can't hope to get enough speedup
from custom hardware to make those real-time frames for a long
time. And when we do get there, folks will have written shaders that
slow down that hardware to hours per frame.

--
-Stephen H. Westin
Any information or opinions in this message are mine: they do not
represent the position of Cornell University or any of its sponsors.

Alex Magidow

May 15, 2000
I think that with current technologies, real-time, programmably shaded
images within reason aren't too difficult. Look at the recent GeForce 2 -
it's pretty good, and even has shading. The biggest argument I hear against
real-time "Toy Story" level graphics is the number of polygons/NURBS
patches/whatever... well, if you tone them down, as has been done in
computer games for the last 10 years, then you'll probably be able to do
something close to realtime, almost RenderMan-level rendering - that's not
to say it'll look like a movie, but it'll look damn good. If you have the
bandwidth, look at some of the GeForce 2 tech demos - the lighting is
pretty amazing, and the shading's pretty good as well. I think that if you
put some sort of cap on the amount of shading calculations (or one chosen
simply by logistics) you could certainly have a fairly high-quality
renderer in real time. And you'd have to have a lot of hardware.

"Stephen H. Westin" wrote:

--

Steve Hollasch

May 15, 2000

Thanks for all the responses. Here are some replies, in no particular
order.

First, real-time raytracing is definitely not a joke (having seen a
number of fairly simple demonstrations to that effect.) See
<http://www.acm.org/tog/resources/RTNews/demos/overview.htm> for a nice
TOG-sponsored collection of various explorations in this area.
Nevertheless, I believe that the Reyes algorithm (or a suitable variant)
would be much better suited for an approach that tackles both quality and
speed, particularly when you can specify "micropolygons" that cover, say,
twenty pixels rather than 1/4 of a pixel (with appropriate modifications to
the algorithm.)

Secondly, I certainly understand the distinctions between prman, BMRT,
RenderMan, Reyes, raytracing, standard scanline, and so forth. I've
provided some additional motivation for my question at the bottom of this
message.

Larry - you assert that the Reyes algorithm needs a scene graph, which
surprises me. Based on my limited understanding, I thought that objects
could be supplied to a Reyes-style renderer in an immediate-mode fashion.
That is, I thought that I wouldn't need an object until I need to render it,
and won't need it again after I'm finished with it. Why is this not so? Is
non-sequential access a hard or soft requirement of the Reyes renderer?

Andrew -- thanks for the pointer to Quick RenderMan.


----- Motivation -----

The 3D rendering world is divided into two widely-separated camps. One
camp renders high-quality images (I believe the term "photorealistic" is
misleading) at a rate of about 4k seconds per image. For these folks image
quality is everything, and an image of a handful of Lambertian objects
floating in space is considered trivial or uninteresting.

The other camp is comprised of folks who insist on a maximum rendering
time of about 33ms per image. For these folks, image quality is an obstacle
on the way to faster rendering times. Talk to them about the huge sacrifice
in quality they pay and they look at you like you're crazy (in my experience
the high-quality camp readily acknowledges the limitations of their
rendering speed, while the high-speed camp is often strikingly blind to the
limitations of their rendering quality.)

Today, there's just not much that lies between these two worlds. I'm
convinced that there is a whole lot of interesting work that can be done in
the range of 125ms to 8s rendering times. Further, I believe that people
who develop realtime renderers have algorithmic blinders on in the sense
that their ultimate goal is often a hardware implementation -- a goal that
obviates a large number of approaches on the way to a silicon pipe (compiled
realtime shaders being one simple example.) In fact, I hope that before too
long, an extensible rendering interface (ala RenderMan) can render images
fast enough to be adequate for speed, and of a quality that will embarrass
the 3D hardware camp. (Yes, I am aware of the "per-pixel" hardware shader
work, but am unconvinced of its quality based on what I've seen. It's still
an *extremely* limited sandbox, and I don't see the boundaries changing
soon.)

Additionally, I wonder how broad in latitude a rendering architecture
could be that would accommodate, according to user settings, a wide range in
quality/speed. A lot of high-quality renderers can certainly be set to
render very low quality images, but the results are rarely amenable to
realtime rendering. Similarly, I can crank up all of the quality settings I
want in my realtime renderer, but I don't consider these images particularly
wondrous (though marketing descriptions assure us the image quality *kicks
ass*.)


Larry Gritz

May 15, 2000
In article <392063c6$1...@news.microsoft.com>,
Steve Hollasch <stev...@microsoft.com> wrote:
> First, real-time raytracing is definitely not a joke (having seen a
>number of fairly simple demonstrations to that effect.)

Well, I didn't mean that it was literally a joke. I meant that all
such demonstrations are fairly simple.

Besides, assuming you could raytrace a moderately complex scene in
real time, it would behoove you to use a more efficient algorithm to
render a hugely complex scene in real time.


> Larry - you assert that the Reyes algorithm needs a scene graph,

Certainly PRMan's implementation uses a scene graph. Well, not a
sophisticated one, but it certainly does batch primitives up until
it's read the entire scene in. (Exception noted: procedurals occupy a
space in the scene graph, but aren't "expanded" until they are
absolutely needed.) Reyes could be implemented purely in immediate
mode, provided that you had enough memory that you didn't need any
kind of bucketing (i.e., a couple Gb). But by building the scene
graph and processing in depth order, thereby eliminating almost all
shading of occluded objects, we have sped up the algorithm by at least
an order of magnitude. That's a big enough difference that I'd
consider it essential. YMMV.
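
In outline, the payoff looks like this (a rough sketch with
hypothetical names, not PRMan's actual code):

    typedef struct Prim { struct Prim *next; /* bound, geometry... */ } Prim;
    typedef struct { Prim *head; } Bucket;

    extern void sort_front_to_back(Bucket *bk);   /* by nearest z           */
    extern int  bound_occluded(Prim *p, int b);   /* bound behind z-buffer? */
    extern void dice_shade_hide(Prim *p, int b);  /* the expensive part     */

    void render_buckets(Bucket *buckets, int nbuckets)
    {
        for (int b = 0; b < nbuckets; b++) {
            sort_front_to_back(&buckets[b]);
            for (Prim *p = buckets[b].head; p != NULL; p = p->next) {
                if (bound_occluded(p, b))
                    continue;           /* rejected before any shading */
                dice_shade_hide(p, b);  /* shading dominates, so every
                                           cull here is the big win    */
            }
        }
    }

None of that front-to-back rejection is possible if you must shade
primitives in whatever order the interface delivers them.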


>That is, I thought that I wouldn't need an object until I need to render it,
>and won't need it again after I'm finished with it. Why is this not so?

That is so, but as I said, multiple orders of magnitude can be saved
in both time and memory if the database is sorted in various ways first,
as opposed to processing primitives in the order that they are presented
to the interface.

But in any case, Andrew is quite right. The RenderMan Interface was
originally designed with hardware in mind, and at least two
implementations have used it for real-time purposes (QRMan on the
NeXT, and also rgl), albeit without programmable shading. In short, I
think that's plenty proof of concept that the API itself is adequate.
As for SL, I tend to think that hardware will adapt itself to it one
of these days (cf. the very wonderful SGI paper in this year's
SIGGRAPH). I personally don't see the point of crippling the language
to the point where it would be implementable on today's hardware, but
wouldn't capture much of the richness that we've come to know and
love. But that's just my bias.

- lg

Steve Hollasch

May 15, 2000

"Larry Gritz" <l...@pixar.com> wrote:
<< Certainly PRMan's implementation uses a scene graph. Well, not a
sophisticated one, but it certainly does batch primitives up until it's read
the entire scene in. (Exception noted: procedurals occupy a space in the
scene graph, but aren't "expanded" until they are absolutely needed.) Reyes
could be implemented purely in immediate mode, provided that you had enough
memory that you didn't need any
kind of bucketing (i.e., a couple Gb). But by building the scene graph and
processing in depth order, thereby eliminating almost all shading of
occluded objects, we have sped up the algorithm by at least an order of
magnitude. >>

Ah, I see what you mean. Most programs would indeed have a scene graph
inside them, but it doesn't need to be done inside the graphics library.
For example, OpenGL is very amenable to and benefits a great deal from
well-designed scene graphs, yet it does not need to implement one itself.
Put another way, I'd guess that, while prman may have scene management code,
the rendering module itself does not -- it just digests the groomed output
from the graph traversal (more or less.)

The reason this is an important point is that realtime folks tend to be
very picky about implementing their own scene graph atop a strictly
immediate-mode low-level (OpenGL or Direct3D being today's two most popular
choices.) When a graphics library requires use of its own scene graph, it
tends to die in the real world (e.g., Doré, PEXlib, Performer, Inventor,
Direct3DRM, DirectAnimation, Fahrenheit.) It's not a question of *whether*
a scene graph exists, but *where*.

Regarding hardware implementation, I believe that there are many
drawbacks to shipping this stuff over a wire onto a specialized card for
processing; problems that I've spent many years banging my head against.
Part of the reason for my questions in this thread is the fact that I
believe we've thrown far too much quality down the drain in order to get a
few more frames per second through hardware, and that by designing software
renderers with an eye to hardware implementation, we've forfeited a wide
variety of optimizations that only general purpose hardware can tackle well.
Aside from speed, 3D hardware offers us nothing over general-purpose
hardware, and precludes things like user-extendible cameras, media,
surfaces, shaders, lights, and so forth, on today's platforms. It also
means that your code works only on x% of the platforms out there, since the
common denominator graphics card will always lag by many years.

Marc Olano

May 16, 2000
Well, having had a big part in two interactive shading systems and one
RenderMan implementation, I still think we're a few years away
from "interactive Toy-Story". You have to scale back on both the scene
complexity and shaders to get something that you can render
interactively today.

In the work we're doing at SGI, we treat an OpenGL rendering pass as a
sort of SIMD instruction. Your shader code is compiled into a set of
rendering passes. This does give you interactive performance for
shading, which is exceptionally cool and a hell of a lot better than
Gouraud shaded textured polygons. Yet it's not Toy Story.

The two biggest limitations with using multi-pass OpenGL for
interactive shading are the limited range and precision and the
inability of most current hardware to do texture lookups from computed
results in the frame buffer. We've added color range and pixel texture
extensions to a pure-software OpenGL renderer. Based on that software
renderer, we were able to do a pretty complete RenderMan 3.7. Even if
hardware existed with those two extensions, we'd have an accelerated
RenderMan, but we still wouldn't have interactive Toy Story. You might
be able to do simple shaders interactively, but your several hundred
line shader is going to turn into several hundred rendering passes.
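
To make the pass-as-SIMD-instruction idea concrete, here's a toy
two-pass "program" computing framebuffer = lit color * texture (a
sketch using plain OpenGL 1.x calls; draw_geometry() is a hypothetical
stand-in for the application's traversal and assumes texture
coordinates are issued and a texture is already bound):

    #include <GL/gl.h>

    extern void draw_geometry(void);   /* hypothetical scene callback */

    void shade_in_two_passes(void)
    {
        /* Pass 1: write the lit diffuse term into the framebuffer. */
        glDisable(GL_TEXTURE_2D);
        glEnable(GL_LIGHTING);
        glDisable(GL_BLEND);
        draw_geometry();

        /* Pass 2: multiply what's there by the texture. */
        glDisable(GL_LIGHTING);
        glColor3f(1.0f, 1.0f, 1.0f);        /* don't tint the texture  */
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_COLOR, GL_ZERO); /* dst = src * dst         */
        glDepthFunc(GL_EQUAL);              /* retouch the same pixels */
        draw_geometry();
        glDepthFunc(GL_LESS);               /* restore defaults        */
        glDisable(GL_BLEND);
    }

Every operator in a shader becomes some arrangement of passes like
pass 2, which is where the limited range/precision and the lack of
texture lookups from framebuffer results really bite.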

As for the scene graph question, all of the shading projects that I've
been a part of have retained data at one point or another. PixelFlow
converted the pseudo-immediate-mode stream into a retained internal
representation (though that had more to do with the 128x64 region size
handled by each node).

On the other hand, an explicit scene graph plays a big part in the
stuff here at SGI. Since we translate the shader into multiple OpenGL
rendering passes, we may end up drawing each object several times. A
scene graph is perfect for that task.

Is retained data necessary for these two systems? Is a retained-mode
interface necessary?

Anyway, there was a SIGGRAPH paper two years ago on the PixelFlow
shading, and there'll be a SIGGRAPH paper this year on the SGI shading.
There are a lot more details on both systems in those papers. Both can
be found at http://reality.sgi.com/olano/papers

Marc Olano

Skint

May 16, 2000

Is there any way at all to add a switch to rgl to do some of this, or
is shading in OpenGL still a grey area? It would make rgl a hell of a
lot more useful if it even supported a very cut-down version of SL
(defaulting to Gouraud-shaded polygons of the underlying colour for
everything else) - texture mapping and step functions would be nice to
include to give the general look of the final shader.

I'm guessing this is a bit like a gaming engine for RenderMan.

In fact would it be possible to write a program to convert the RIB
files into a Quake 3 pak file and convert the shader into a Quake 3
shader?
http://dl.fileplanet.com/dl/dl.asp?q2pmp/Q3AShader_manual_pdf.zip

Obviously this would mean buying/licensing the Quake 3 engine or
creating a proprietary one that already knows RIB/SL (rgl is half
there!) but if it is possible it would be a very useful tool.

I don't think UnrealScript supports shaders to the same degree as
Quake - I haven't looked into it thoroughly enough, but the conversion
would probably be easier as it is object oriented and (very very
very loosely) based on Java.

I'm guessing that tessellating out parametric objects and the possible
texture mapping headaches would be the main problems to overcome.

This may not make much sense, as I'm writing it as I think it :o)

Simon


Andrew Bromage

May 16, 2000
G'day all.

"Steve Hollasch" <stev...@microsoft.com> writes:

>Put another way, I'd guess that, while prman may have scene management code,
>the rendering module itself does not -- it just digests the groomed output
>from the graph traversal (more or less.)

More or less. As I understand PRMan's implementation, the scene graph
and "rasterising" systems are deeply intertwined. For example, if the
rasteriser decides it wants to subdivide a surface, the resulting bits
are inserted back into the "scene graph", so PRMan's scene graph is a
bit more dynamic than real-time programmers are used to, since it can
change during the process of rendering.
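
Schematically, something like this hypothetical split loop (the names
are mine; the real thing is described in the published Reyes papers):

    typedef struct Surface Surface;    /* an opaque parametric patch */

    extern Surface *pop_front(void);   /* next item, NULL when empty */
    extern int      diceable(Surface *s);          /* small on screen?  */
    extern void     dice_shade_hide(Surface *s);   /* micropolygon grid */
    extern void     split(Surface *s, Surface **a, Surface **b);
    extern void     push_back(Surface *s);

    void drain_scene(void)
    {
        Surface *s, *a, *b;
        while ((s = pop_front()) != NULL) {
            if (diceable(s)) {
                dice_shade_hide(s);   /* small enough: dice and shade */
            } else {
                split(s, &a, &b);     /* halve in parameter space     */
                push_back(a);         /* the resulting bits go right  */
                push_back(b);         /*   back into the work list    */
            }
        }
    }

So the "scene graph" doubles as the renderer's work queue, which is
why it keeps changing while rendering proceeds.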

> The reason this is an important point is that realtime folks tend to be
>very picky about implementing their own scene graph atop a strictly
>immediate-mode low-level (OpenGL or Direct3D being today's two most popular
>choices.) When a graphics library requires use of its own scene graph, it
>tends to die in the real world (e.g., Doré, PEXlib, Performer, Inventor,
>Direct3DRM, DirectAnimation, Fahrenheit.) It's not a question of *whether*
>a scene graph exists, but *where*.

Right. With RenderMan, the API/protocol does not impose a scene graph
on the scene. Both the bits "higher up" and the bits "lower down" can
build whatever graphs they think is most suitable for the job that they
have to do.

> Regarding hardware implementation, I believe that there are many
>drawbacks to shipping this stuff over a wire onto a specialized card for
>processing; problems that I've spent many years banging my head against.

3D hardware will no doubt end up with the same fate as the specialised
LISP machines of years gone by, as soon as we have lots of video
bandwidth in commodity form.

Cheers,
Andrew Bromage

Bjorke

May 16, 2000
In article <392063c6$1...@news.microsoft.com>,
"Steve Hollasch" <stev...@microsoft.com> wrote:
>I'm convinced that there is a whole lot of interesting work
> that can be done in
> the range of 125ms to 8s rendering times.

Interesting, yes. Economically motivated?

I compare the realtime and quality camps to the worlds of live video and
film-based movies. Video is a lot cheaper, and it has a lot of
immediacy. But film remains worth the trouble and expense for the
higher-end productions -- even those delivered exclusively on video.

Real-time provides a "live" experience, but beyond that the range of
worthwhile render times shoots up dramatically because people CAN go
longer without significant economic penalty. If you have a 30-second
spot, you have at least a couple of weeks to deliver those 720 frames.
Why not spend an hour on each? Even a 22-minute 30fps Saturday morning
show with seven days to render could be done at around 15 seconds per
frame on a single CPU, so why not buy 10 CPUs and let the render times
run up to a couple of minutes and still have overhead for re-dos?

Stephen H. Westin

May 16, 2000
Bjorke <bjo...@botzilla.com> writes:

> In article <392063c6$1...@news.microsoft.com>,
> "Steve Hollasch" <stev...@microsoft.com> wrote:
> >I'm convinced that there is a whole lot of interesting work
> > that can be done in
> > the range of 125ms to 8s rendering times.
>
> Interesting, yes. Economically motivated?

<snip>

> Real-time provides a "live" experience, but beyond that the range of
> worthwhile render times shoots up dramatically because people CAN go
> longer without significant economic penalty. If you have a 30-second
> spot, you have at least a couple of weeks to deliver those 720 frames.
> Why not spend an hour on each? Even a 22-minute 30fps Saturday morning
> show with seven days to render could be done at around 15 seconds per
> frame on a single CPU, so why not buy 10 CPUs and let the render times
> run up to a couple of minutes and still have overhead for re-dos?

<grumble>

Because there are other purposes for rendering than entertainment,
that's why. And, for example, someone designing a car would like to
see high-quality visual feedback in less than 20 minutes. Real time
would be great, but often you really want something better in visual
quality. And with a hundred designers in the same building, there's a
strong incentive not to give each one 10 CPU's, even if you don't
worry about the problems of using them all on a single frame.

As for economic motivation, there is actually money spent on other
things besides entertainment. More money, in fact: $200M is a really
expensive movie, but incredibly cheap for a vehicle program, which
would probably be soemwhere the other side of $1bn. And there are lots
of industries out there designing things: home appliances, electronic
equipment, architecture, ...

</grumble>

Stephen H. Westin

May 16, 2000
sibunks@hotmaildotcom. (Skint) writes:

<snip>

> Obviously this would mean buying/licensing the Quake 3 engine or
> creating a proprietary one that already knows RIB/SL (rgl is half
> there!)

I think that's a very optimistic statement. Rgl is able to take
geometry and convert to entities (polygons) that OpenGL can
handle. That's actually a relatively simple task, given the fairly
obvious mapping involved. Mapping shader language to the OpenGL
equivalent is a lot harder; otherwise the SGI paper wouldn't get into
SIGGRAPH.

Is it conceivable? Yes. Is it simple? No.

> but if it is possible it would be a very useful tool.

<snip>

Mark VandeWettering

May 16, 2000
Steve Hollasch wrote:

> First, real-time raytracing is definitely not a joke (having seen a
> number of fairly simple demonstrations to that effect.) See
> <http://www.acm.org/tog/resources/RTNews/demos/overview.htm>...

While not a joke, neither are these (admittedly very spiffy) demos what we
would call "production" raytracing engines. They often trace very specific
scenes, with a very limited number of primitives, and often use tricks
which aren't generally applicable to more complex scenes.

The first computer I ever ran a raytracer on was a Vax 11/750, with less
than 1 MIP, and WAY less than 1 MFLOP. My $600 home machine is literally
hundreds, if not thousands, of times faster. But in many ways we are just
as far away from real-time raytracing as before, because we place greater
demands on rendering software than we ever would have attempted before.

> Larry - you assert that the Reyes algorithm needs a scene graph, which
> surprises me. Based on my limited understanding, I thought that objects
> could be supplied to a Reyes-style renderer in an immediate-mode fashion.
> That is, I thought that I wouldn't need an object until I need to render it,
> and won't need it again after I'm finished with it. Why is this not so? Is
> non-sequential access a hard or soft requirement of the Reyes renderer?

The original Reyes paper claimed that geometry need not be sorted in depth. This
turns out to be a really bad idea, because the assumption in the Reyes paper (low
depth complexity) is so seldom correct. The current implementation of prman processes
primitives in bucket order, and sorts from front to back. This allows you to
avoid shading primitives which aren't visible, which is a big savings, as shading is by far
the most complex part of rendering.

> ----- Motivation -----
>
> The 3D rendering world is divided into two widely-separated camps. One
> camp renders high-quality images (I believe the term "photorealistic" is
> misleading) at a rate of about 4k seconds per image. For these folks image
> quality is everything, and an image of a handful of Lambertian objects
> floating in space is considered trivial or uninteresting.

If those are the types of scenes you wish to render, the Reyes architecture
is probably overkill.

> The other camp is comprised of folks who insist on a maximum rendering
> time of about 33ms per image. For these folks, image quality is an obstacle
> on the way to faster rendering times. Talk to them about the huge sacrifice
> in quality they pay and they look at you like you're crazy (in my experience
> the high-quality camp readily acknowledges the limitations of their
> rendering speed, while the high-speed camp is often strikingly blind to the
> limitations of their rendering quality.)

I'll merely say that often hardware engineering types are rather staggered by
the amount of calculations we do during shading.

> Today, there's just not much that lies between these two worlds. I'm
> convinced that there is a whole lot of interesting work that can be done in
> the range of 125ms to 8s rendering times. Further, I believe that people
> who develop realtime renderers have algorithmic blinders on in the sense
> that their ultimate goal is often a hardware implementation -- a goal that
> obviates a large number of approaches on the way to a silicon pipe (compiled
> realtime shaders being one simple example.) In fact, I hope that before too
> long, an extensible rendering interface (ala RenderMan) can render images
> fast enough to be adequate for speed, and of a quality that will embarrass
> the 3D hardware camp. (Yes, I am aware of the "per-pixel" hardware shader
> work, but am unconvinced of its quality based on what I've seen. It's still
> an *extremely* limited sandbox, and I don't see the boundaries changing
> soon.)

The hardware is a moving target as well. Antialiasing hardware is beginning
to come about, which addresses some of the quality gap. I think predicting the
end to either hardware or software rendering is likely to be incorrect.

As an aside, Pixar is interested in rendering times in that regime not so much
to speed up final rendering, but to improve interactivity for rendering-intensive
jobs like lighting.

Mark

--
Mark T. VandeWettering Telescope Information (and more)
Email: <ma...@pixar.com> http://raytracer.org

Steve Hollasch

May 16, 2000

I wrote: << I'm convinced that there is a whole lot of interesting work
that can be done in the range of 125ms to 8s rendering times. >>

"Bjorke" <bjo...@botzilla.com> replies:
<<Interesting, yes. Economically motivated?>>

Stephen Westin has already responded a bit to this excellent question.
While my personal motivation is not really economic, lucre has always been
the largest historical arbiter of technology.

This question is also very related to objections raised by Gritz and
VandeWettering, so I'll tackle them in this response also.

If you plot all renderers on a graph where the horizontal axis denotes
speed and the vertical axis denotes quality, you'll find that there's a
clump in the "very low speed, very high quality" area, and a clump in the
"very high speed, very low quality" area. What interests me is the dearth
of solutions in the middle -- why is the distribution so binary?

High-quality folks (Gritz and VandeWettering) respond that anything you
can do in that area is not worth the effort, or is trivial or uninteresting.
Specifically, VandeWettering says such applications "... trace very specific
scenes, with a very limited number of primitives, and often use tricks which
aren't generally applicable to more complex scenes." In short, the middle
area is uninteresting because its quality is inferior to production
renderers. Similarly, a realtime proponent would dismiss the middle area
because it's too slow, and would not compare to production realtime systems.
Bjorke objects because he doesn't see any money in it.

So let me go off a bit about why this area is interesting. As Westin
points out, both camps are typically geared to one particular area:
entertainment. If I'm going to a movie, I want "photorealistic" [sic]
quality; if I'm playing a twitch game, I want immediate feedback. There
are, however, a wide range of applications outside the entertainment sphere
that are just as valid and deserving of attention.

Example one: lots of sci viz folks use RenderMan to help them
understand complex datasets. Programmable shaders are a great way to expose
isosurfaces, or emphasize irregularities in the data. Right now the choice
is to have a very quickly rendered approximation (bordering on usefulness)
or a well-rendered scene that takes something on the order of an hour to get
a single frame. The first solution is often too crude to really plumb the
minutiae of a dataset, and the second solution is geared toward making
pretty pictures. What is often wanted is NOT the pictures themselves, but a
well-rendered assisted *exploration* of the data. Imagine how much more
powerful (and appropriate) a renderer would be that would allow the
scientist to craft a specific shader to illuminate a feature of the dataset,
at frame rates of 1Hz. Such a system would be far superior to either of the
two types of today's renderers.

As a second example, consider Adobe Acrobat. What is it, really? In my
view, Acrobat is nothing but an image-compression package, tailored to
images of printed text and diagrams. Similarly, there's a large market for
a system which effectively compresses synthetic images in the geometric
sense. That is, if you want to communicate an image of the American flag,
you're better off using PostScript than GIF or JPEG. A middle-ground
renderer able to "decompress" (that is, render) a synthetic image (at
arbitrary resolution, color depth, gamma, angle, et cetera) is much better
and more flexible than a static image expressed as a collection of (brittle)
pixels. If it takes two seconds for my web browser to display the image,
I'm not going to have a heart attack.

Finally, CAD networks are really beginning to take off. Last Fall, Ford
helped launch a CAD network where parts manufacturers were able to bid on
specific parts that Ford needed. The one-month savings on the nascent
system: $10 million. Believe me, there's money out there.

Finally, before the "realtime Toy Story" quote goes much further, yes,
it's obviously the product of classic over-hyped marketing (and anticipating
future misquotes, I never said nor defended it.) My point is that rendering
quality does not have to be "Toy Story" to be legitimate, valid, and useful.


Steve Hollasch

May 16, 2000

"Marc Olano" <ol...@helasco.engr.sgi.com> writes

<< The two biggest limitations with using multi-pass OpenGL for interactive
shading are the limited range and precision and the inability of most
current hardware to do texture lookups from computed results in the frame
buffer. >>

Yes, that's sort of the second part of my point. Dedicated hardware has
a lot of limitations that we really aren't aware of any more because we've
always got the OGL pipe in mind when designing our realtime rendering code.
I'm interested in thinking today about a rendering approach that will make
sense on general-purpose computing hardware four or five years out. What if
I were to design a moderately quick system that fundamentally had the full
toolbox of GP hardware available to it?

<< As for the scene graph question, all of the shading projects that I've
been a part of have retained data at one point or another. PixelFlow
converted the pseudo-immediate-mode stream into a retained internal
representation (though that had more to do with the 128x64 region size
handled by each node).

On the other hand, an explicit scene graph plays a big part in the stuff
here at SGI. Since we translate the shader into multiple OpenGL rendering
passes, we may end up drawing each object several times. A scene graph is
perfect for that task.
>>

To repeat myself, I absolutely understand the importance of a good scene
graph (or other retained structure) in the interest of performance (that's
basically been my job description for the last decade.) Having worked on a
number of cancelled scene graph projects, I know in my bones that a graphics
API that forces the use of a particular scene graph will die in the wild.
Application programmers vastly prefer doing their own graph, using their own
attribute management scheme, attribute override semantics, high-level
texture management scheme, constant subgraph folding, and so forth. I
believe that OGL is as popular as it is precisely *because* it's a straight
immediate-mode API. Yes, it's easy to write a slow OGL app, and a bit of
work to write a graph that feeds OGL a nicely state-sorted, culled,
primitive stream, but that's largely what interactive apps *want*.

Indeed, regardless of the fact that you work for SGI, I suspect that the
reason you layered your scene graph atop OGL is because of this flexibility.
If OGL had its own scene graph, would you have used it underneath your own
experimental scene graph? In short, immediate-mode APIs are finicky and
powerful and durable; retained-mode APIs are also powerful, but tend to die.


Steve Hollasch

May 16, 2000
Hmmm, in re-reading what I wrote, I'm uncomfortable with the following:

<< Specifically VandeWettering says such applications "... trace very
specific scenes, with a very limited number of primitives, and often use
tricks which aren't generally applicable to more complex scenes." In short,
the middle area is uninteresting because its quality is inferior to
production renderers. >>

When Mark wrote this, he was responding to the quality of the realtime
raytracing demos, not to the area of middle-ground renderers specifically.
While I believe it's a valid summary of the objections voiced against
middle-ground renderers in general, the quote above is stretched beyond what
he intended. Sorry, Mark.


Stephen H. Westin

May 16, 2000
"Steve Hollasch" <stev...@microsoft.com> writes:

<snip>

> If you plot all renderers on a graph where the horizontal axis denotes
> speed and the vertical axis denotes quality, you'll find that there's a
> clump in the "very low speed, very high quality" area, and a clump in the
> "very high speed, very low quality" area. What interests me is the dearth
> of solutions in the middle -- why is the distribution so binary?

Well, perhaps it's possible to bridge the gap; prman with a simple
model and shaders can give render times of a minute or less, adequate
for interactive design. What I haven't seen much is slower versions of
real-time renderers; compromising speed but gaining in quality. I
suspect that is because doing this in a flexible way is just too
painful; lots of restrictive assumptions have gone into making it
fast, and those hamstring you when you try to make it better. Perhaps
more relevant, any solution is likely to be obsolete with the next
nifty bit o' hardware.

Bjorke

May 17, 2000

> Bjorke objects because he doesn't see any money in it.

heh heh... To be fair, I didn't object to anything, merely asked a
question. And believe me, *I* never seem to see any money in any of it
:/

There is actually quite a bit of rendering in the time ranges you speak
of going on out in the world, but usually the less-than-realtime delays
are in bandwidth issues and non-graphics calculation. For example, when
I change frames in Maya with a heavy character loaded, the delay may be
anywhere from a second to a minute before Maya actually starts feeding
polys into the render hardware. Which is, theoretically, still an as-
fast-as-electrically-possible situation.

Maybe the problem is that 125ms to 8s is within the range of impatience.
8s still seems like a dog-slow version of "realtime." After that
psychological watershed, people stop caring in the same way -- the
process engages the mind in a different, less-immediate, less-visceral
manner (hmm, sounds like I'm back to comparing it to the "live"
experience again, and I suppose I am). Immediacy is a big deal, whether
in an RPG or a spreadsheet. As a user, if I have to wait, I'll want a
very impressive result.

I don't think this is limited to entertainment -- entertainment simply
happens to be the business at both extremes of the curve. Aerospace used
to occupy both positions.

Sean C. Cunningham

May 28, 2000
In article <8fpvsa$2tk$1...@sherman.pixar.com>, l...@pixar.com (Larry Gritz) wrote:

<snip>

> But in any case, Andrew is quite right. The RenderMan Interface was
> originally designed with hardware in mind, and at least two
> implementations have used it for real-time purposes (QRMan on the
> NeXT, and also rgl), albeit without programmable shading...

Sorry I'm a little late to this one. Men before Y is kicking my tail
right now and I'm only a couple short weeks to delivery. What about
CHAPREYES? Or would it qualify? Wasn't it that, or another similar
implementation on a Pixar, that was used to generate the dream sequence in
"Red's Dream"?

I may be wrong about the CHAPREYES part. I'm having fuzzy memories of CGW
articles from 1989 or 1990, back when Pixar was still making hardware and
doing research in volume visualization for medicine and such in addition
to making pretty pictures. They're still the kinds of articles I just
gloss over so my pattern recognition only picked out a few keywords.
Anyway, I seem to remember something special about that sequence. That it
was rendered in realtime or near realtime and that's one of the reasons
why the animators had to do extreme squash-n-stretch instead of motion
blur, etc.

Hmmm... I wonder if one of those Pixar Image Computers will ever show up
on eBay.

--
Sean C. Cunningham .. poc...@d2.com
:........ poc...@whoopassentertainment.com :
Digital Domain .....................................
Venice, Ca http://www.whoopassentertainment.com

Bjorke

May 28, 2000
I may have been doing something wrong, but Chapreyes never rendered
anywhere near realtime on *my* PIC.

--
kb
LT Supv
Final Fantasy
http://www.finalfantasy.com/

Stephen H. Westin

May 30, 2000
poc...@whoopassentertainment.com (Sean C. Cunningham) writes:

> In article <8fpvsa$2tk$1...@sherman.pixar.com>, l...@pixar.com (Larry Gritz) wrote:
>
> <snip>
>
> > But in any case, Andrew is quite right. The RenderMan Interface was
> > originally designed with hardware in mind, and at least two
> > implementations have used it for real-time purposes (QRMan on the
> > NeXT, and also rgl), albeit without programmable shading...
>
> Sorry I'm a little late to this one. Men before Y is kicking my tail
> right now and I'm only a couple short weeks to delivery. What about
> CHAPREYES? Or would it qualify? Wasn't it that, or another similar
> implementation on a Pixar, that was used to generate the dream sequence in
> "Red's Dream"?

Yes, ChapReyes was used to render the dream sequence. But that doesn't
mean real time; the Pixar Image Computer was basically a 10MIPS/40MOPS
16-bit integer SIMD machine; a 66MHz PowerMac would probably blow it
out of the water. At the time, it was hot stuff, but that was a long
time ago.

> I may be wrong about the CHAPREYES part. I'm having fuzzy memories of CGW
> articles from 1989 or 1990, back when Pixar was still making hardware and
> doing research in volume visualization for medicine and such in addition
> to making pretty pictures. They're still the kinds of articles I just
> gloss over so my pattern recognition only picked out a few keywords.
> Anyway, I seem to remember something special about that sequence. That it
> was rendered in realtime or near realtime and that's one of the reasons
> why the animators had to do extreme squash-n-stretch instead of motion
> blur, etc.

The lack of motion blur in ChapReyes rings a bell, but I doubt that
that was due to real-time capability. The Pixar Image Computer was
only conceived as a 2D image processor, and it's a tribute both to the
generality of its design and the ingenuity of the programmers that it
ever was coaxed into doing 3D rendering. I suspect that it was an
effort to sell more hardware and generate revenue to finance the
stillborn 3D device.

> Hmmm...I wonder if one of those Pixar imaging computers will ever show up
> on e-bay.

I think I know where two of them have yet to be thrown out; the owners
finally gave up on keeping the Sun host machines running. They are
still looking for a replacement display device with true 12 bits per
channel display.
