RenderMan Shaders

Evelyn Normington

Aug 3, 2024, 5:33:46 PM
to fakliponro

A shader written in RSL can be used without changes on any RenderMan-compliant renderer, such as Pixar's PhotoRealistic RenderMan, DNA Research's 3Delight, SiTex Graphics' AIR, or an open-source solution such as Pixie or Aqsis.

Shaders do their work by reading and writing special variables such as Cs (surface color), N (the normal at a given point), and Ci (final surface color). The arguments to a shader are global parameters attached to objects of the model (so one metal shader can be used for different metals, and so on). Shaders have no return values, but functions can be defined that take arguments and return a value. For example, the following function computes vector length using the dot product operator ".":
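
The definition, essentially as it appears in the RenderMan Interface Specification:

```
float length(vector v)
{
    /* the "." operator is the dot product; v . v is the
       squared length of v */
    return sqrt(v . v);
}
```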

In this document, shading includes the entire process of computing the color of a point on a surface. The shading process requires the specification of light sources, surface material properties, and volume or atmospheric effects. The interpolation of color across a primitive, in the sense of Gouraud or Phong interpolation, is not considered part of the shading process. Each part of the shading process is controlled by giving a function that mathematically describes that part of the process. Throughout this document the term shader refers to a procedure that implements one of these processes. There are thus three major types of shaders: light source shaders, surface shaders, and volume shaders.

Conceptually, it is easiest to envision the shading process using ray tracing (see Figure 1, below). In the classic recursive ray tracer, rays are cast from the eye through a point on the image plane. Each ray intersects a surface, which causes new rays to be spawned and traced recursively. These rays are typically directed towards the light sources and in the directions of maximum reflection and transmittance. Whenever a ray travels through space, its color and intensity are modulated by the volume shader attached to that region of space. If that region is inside a solid object, the volume shader is the one associated with the interior of that solid; otherwise, the exterior shader of the spawning primitive is used. Whenever an incident ray intersects a surface, the surface shader attached to that geometric primitive is invoked to control the spawning of new rays and to determine the color and intensity of the incoming or incident ray from the color and intensity of the outgoing rays and the material properties of the surface. Finally, whenever a ray is cast to a light source, the light source shader associated with that light source is evaluated to determine the color and intensity of the light emitted. The shader evaluation pipeline is illustrated in Figure 2.

This description of the shading process in terms of ray tracing is done because ray tracing provides a good metaphor for describing the optics of image formation and the properties of physical materials. However, the Shading Language is designed to work with any rendering algorithm, including scanline and z-buffer renderers, as well as radiosity programs.

Note that volume shaders do not share member variables with surface shaders. While it is possible to define a shader that contains both surface and volume methods and use the same shader definition in RiSurface and RiAtmosphere calls, separate shader instances will result and each instance will have its own member variables. The atmosphere shader must use message passing (discussed below) to access the surface shader's public member variables.
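
A minimal sketch of such message passing, assuming the surface shader exposes a hypothetical parameter named "surfacetint":

```
volume tintedatmosphere()
{
    color tint = 1;
    /* surface() fetches a parameter of the surface shader attached
       to the primitive that spawned the ray; it returns 1 if the
       parameter exists. "surfacetint" is a hypothetical name. */
    if (surface("surfacetint", tint) != 0)
        Ci *= tint;
}
```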

The prelighting method typically performs texture lookups and BRDF calculations that are independent of the lights. The lighting method contains the illuminance loops that call the lights, and the postlighting method performs any postprocessing that is necessary after the lights are executed. These methods could be leveraged in a future re-rendering implementation. After a light is interactively modified (e.g. changing its position or intensity), the lighting method can be called with only the modified light, re-calculating its contribution.
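
A hypothetical shader class illustrating how work might be split across these three methods (the class, parameter, and variable names are illustrative, not part of any standard library):

```
class simplediffuse(float Kd = 0.8)
{
    varying color albedo = 0;

    /* light-independent work: texture lookups, BRDF setup */
    public void prelighting(output color Ci, Oi)
    {
        albedo = Kd * Cs;
    }

    /* the illuminance loop that calls the lights */
    public void lighting(output color Ci, Oi)
    {
        normal Nf = faceforward(normalize(N), I);
        illuminance(P, Nf, PI/2)
            Ci += albedo * Cl * (normalize(L) . Nf);
    }

    /* postprocessing once all lights have been executed */
    public void postlighting(output color Ci, Oi)
    {
        Oi = Os;
        Ci *= Oi;
    }
}
```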

The lighting() method itself is still called, but only for REYES grids. The two new pipeline methods decouple view-independent shading from view-dependent shading and thus permit the renderer to cache the view-independent portion. See the Physically Plausible Shading application note for more information.

In addition to the standard pipeline stages, shader objects support initialization per instance and per grid via the construct() and begin() methods. The construct() method is limited in scope and does not permit access to varying data. It can be used to precompute uniform data, or perform other initializations which pertain to all invocations of a shader instance. The begin() method permits data to be initialized before the remainder of the pipeline runs.
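
A sketch of the two initialization methods (the class and parameter names are illustrative): construct() touches only uniform data, while begin() may also initialize varying data per grid.

```
class wavesurf(uniform float freq = 10)
{
    uniform float scale = 0;
    varying float wave = 0;

    /* construct(): run once per instance; uniform data only */
    public void construct()
    {
        scale = 2 * PI * freq;
    }

    /* begin(): run per grid, before the rest of the pipeline;
       varying data such as s is available here */
    public void begin()
    {
        wave = 0.5 + 0.5 * sin(scale * s);
    }

    public void surface(output color Ci, Oi)
    {
        Oi = Os;
        Ci = Os * Cs * wave;
    }
}
```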

Additionally, when the transmission hit mode indicates that opacity should be calculated by running a shader, the renderer may cache the opacity for faster execution. When caching opacity for transmission ray hits, in the presence of an opacity method, the following methods are run:

Finally, when running shading on a ray hit, if the surface supports caching of the view-independent shading via a diffuselighting() method, then that color will be present in Ci when the pipeline starts and the following methods will be run:

When a shader is attached to a geometric primitive it inherits a set of varying variables that completely defines the environment in the neighborhood of the surface element being shaded. These state variables are predefined and should not be declared in a Shading Language program. It is the responsibility of the rendering program to properly initialize these variables before a shader is executed.

In these tables the top section describes state variables that can be read by the shader. The bottom section describes the state variables that are the expected results of the shader. By convention, capitalized variables refer to points and colors, while lower-case variables are floats. If the first character of a variable's name is a C or O, the variable refers to a color or opacity, respectively. Colors and opacities are normally attached to light rays; this is indicated by appending a lowercase subscript. A lowercase d prefixing a variable name indicates a derivative.

All predefined variables are considered to be read-only, with the exception of the result variables, which are read-write in the appropriate shader type, and Cs, Os, N, s and t, which are read-write in any shader in which they are readable. Vectors are not normalized by default.

The geometry is characterized by the surface position P, which is a function of the surface parameters (u,v). The rates of change of the surface parameters are available as (du,dv). The parametric derivatives of position are also available as dPdu and dPdv. The actual change in position between points on the surface is given by P(u+du) = P + dPdu*du and P(v+dv) = P + dPdv*dv. The calculated geometric normal perpendicular to the tangent plane at P is Ng. The shading normal N is initially set equal to Ng unless normals are explicitly provided with the geometric primitive. The shading normal can be changed freely; the geometric normal is automatically recalculated by the renderer when P changes, and cannot be changed by shaders. The texture coordinates are available as (s,t). Figure 3 shows a small surface element and its associated state.

The optical environment in the neighborhood of a surface is described by the incident ray I and light rays L. The incoming rays come either directly from light sources or indirectly from other surfaces. The direction of each of these rays is given by L; this direction points from the surface towards the source of the light. A surface shader computes the outgoing light in the direction -I from all the incoming light. The color and opacity of the outgoing ray are Ci and Oi. (Rays have an opacity so that compositing can be done after shading. In a ray tracing environment, opacity is normally not computed.) If either Ci or Oi is not set, they default to black and opaque, respectively.
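
The standard matte shader is a compact example of this: it reads Cs, Os, N, and I, and writes the result variables Ci and Oi.

```
surface matte(float Ka = 1, Kd = 1)
{
    /* flip N so it faces the viewer (opposes I) */
    normal Nf = faceforward(normalize(N), I);
    Oi = Os;
    Ci = Os * Cs * (Ka * ambient() + Kd * diffuse(Nf));
}
```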

A light source shader is slightly different (see Figure 4: Light source shader state). It computes the amount of light cast along the direction L which arrives at some point in space Ps. The color of the light is Cl, while the opacity is Ol. The geometric parameters described above (P, du, N, etc.) are available in light source shaders; however, they are the parameters of the light-emitting surface (e.g., the surface of an area light source), not the parameters of any primitive being illuminated. If the light source is a point light, P is the origin of the light source shader space and the other geometric parameters are zero. If either Cl or Ol is not set, they default to black and opaque, respectively.
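
The standard pointlight shader illustrates this: the illuminate statement casts light outward from the point from, and Cl falls off with the squared distance (L . L).

```
light pointlight(float intensity = 1;
                 color lightcolor = 1;
                 point from = point "shader" (0, 0, 0))
{
    illuminate(from)
        Cl = intensity * lightcolor / (L . L);
}
```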

A volume shader is not associated with a surface, but rather attenuates a ray color as it travels through space. As such, it does not have access to any geometric surface parameters, but only to the light ray I and its associated values. The shader computes the new ray color at the ray origin P-I. The length of I is the distance traveled through the volume from the origin of the ray to the point P.
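
The standard fog shader shows this use of length(I): the farther the ray travels through the volume, the more the incoming Ci and Oi are blended towards the fog color.

```
volume fog(float distance = 1; color background = 0)
{
    /* attenuation grows with the distance traveled, length(I) */
    float d = 1 - exp(-length(I) / distance);
    Ci = mix(Ci, background, d);
    Oi = mix(Oi, color(1, 1, 1), d);
}
```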

The displacement shader environment is very similar to a surface shader, except that it only has access to the geometric surface parameters. It computes a new P and optionally a new N and dPdtime. In rendering implementations that do not support the Displacement capability, modifications to P or dPdtime will not actually move the surface (change the hidden surface elimination calculation); however, modifications to N will still occur correctly.
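
A minimal displacement sketch: P is moved along the normal and calculatenormal() then recomputes N from the displaced surface. The shader name, amplitude Km, and noise frequency are illustrative choices.

```
displacement lumpy(float Km = 0.25)
{
    /* center the noise around zero so bumps go both ways */
    float amp = Km * (float noise(4 * P) - 0.5);
    P += amp * normalize(N);
    N = calculatenormal(P);
}
```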

In the context of an imager shader, P is the position of the pixel center in current space, as it is for all shaders. The other geometric variables have their usual meanings. The variables u and v run from 0 to 1 over the entire output image (over the ScreenWindow).

The imager shader environment also provides access to the texture mapping variables s and t, which are the texture mapping coordinates over the ScreenWindow. These coordinates represent pixel centers, such that calls to texture() can map an appropriately prepared image over the entire output resolution. A raster coordinate may be obtained using the formula (s*xres - 0.5, t*yres - 0.5).
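
A sketch of an imager shader that uses (s,t) this way, compositing the rendered pixels over a backplate image (the shader and parameter names are hypothetical):

```
imager backplate(string mapname = "")
{
    if (mapname != "") {
        /* map the prepared image over the full output resolution */
        color bg = color texture(mapname, s, t);
        Ci += (1 - alpha) * bg;
        Oi = 1;
        alpha = 1;
    }
}
```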
