Writing new BRDF


Antonio Neto

Apr 5, 2015, 12:50:33 PM
to osl...@googlegroups.com
Hi guys,

I am aware that at the moment it's not possible to add a new BRDF function to the OSL core, which means that anyone writing an OSL shader has to use the closures that are available...
I am wondering whether it's possible to write a BRDF from scratch inside an OSL shader, considering that it is possible to compute dot(N,-I) as well as use other global variables, functions, and math... So in theory it should be possible to build a BRDF from scratch, right?

Best regards,
Antonio Neto.

Larry Gritz

Apr 7, 2015, 4:36:28 PM
to osl...@googlegroups.com
It's not really possible to do so in any ordinary way, since OSL shaders are designed to return a closure which, once computed, can then be sampled by the renderer for a variety of incident and/or light directions, without having to re-run the shader at that point. You can't ask for the light direction (or incoming radiance) at all. And although it looks like you can use "I"... it's meant for certain special effects and once you start using it you aren't writing a proper "view-independent" shader any more.

So the surface shader is assembling a closure out of primitive components (like diffuse, microfacet, etc.) provided by the renderer, and of course doing any computations necessary to fully evaluate the weights and parameter values to those closures describing the material at that position (basically, everything except for I, L, and the incoming radiance).

The actual design of those primitive closure components is tricky -- not conceptually (you can see several in OSL's "testrender"), but in terms of the implementation on the renderer side. They just need to know too much about the renderer internals to work correctly and efficiently, and to plug into the renderer's overall sampling and integration scheme. At SPI, for our proprietary renderer, we've overhauled both the implementations and the internal API itself for those closure components a shocking number of times in recent years, so I'm really glad we didn't specify that in the OSL standard itself. It has proven to be the least stable thing associated with OSL (I mean API stability; it works just fine). The good news is, by decoupling the sampling/integration from the shading, we've done these overhauls without ever needing to change any .osl shaders themselves. It just all works.
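As a rough illustration of what one of those renderer-side components might look like, here is a minimal sketch. Every name and signature below is invented for this example -- this is not OSL's API, nor any particular renderer's -- it's just the general shape of an eval-able closure component, in the spirit of testrender:

```cpp
#include <cmath>

// All names here are invented for illustration; this is not OSL's actual API.
constexpr float kPi = 3.14159265f;

struct Vec3   { float x, y, z; };
struct Color3 { float r, g, b; };

// A hypothetical closure-component interface on the renderer side.
// A real renderer would also fold in ray types, PDFs for multiple
// importance sampling, differentials, and much more.
class BSDF {
public:
    virtual ~BSDF() {}
    // Evaluate the lobe for view direction wo and light direction wi,
    // both in a local frame where the shading normal is (0,0,1).
    virtual Color3 eval(const Vec3& wo, const Vec3& wi) const = 0;
};

// The simplest concrete component: a Lambertian lobe, f_r = albedo / pi.
class Lambertian : public BSDF {
    Color3 albedo_;
public:
    explicit Lambertian(const Color3& a) : albedo_(a) {}
    Color3 eval(const Vec3&, const Vec3& wi) const override {
        // Zero below the surface, albedo/pi in the upper hemisphere.
        float f = wi.z > 0.0f ? 1.0f / kPi : 0.0f;
        return { albedo_.r * f, albedo_.g * f, albedo_.b * f };
    }
};
```

The point of the sketch is only that the shader never calls eval() -- the renderer does, later, with directions of its own choosing.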

Those primitive components are supplied by the renderer, and it's certainly possible that a particular renderer may want to expose its internal APIs for closures and thus allow plug-ins or other extensions where you can write your own. That's up to the specific renderer. I wouldn't mind if the renderer authors all got together and, if they could agree on what a closure component (or BSDF) really "looks like" underneath, publish either a shared C++ API allowing BSDFs to be added, or come to agreement on how to express it in OSL itself. But so far, we haven't done so, and the differences between the renderers and the constant changes within even one renderer make me think we aren't quite there yet.

-- lg
--
Larry Gritz
l...@larrygritz.com




Antonio Neto

Apr 7, 2015, 5:18:42 PM
to osl...@googlegroups.com
I see, that's frustrating. If the renderer doesn't give us enough support, we can't do much, and the OSL proposition, in my view, becomes a limited thing. To me the whole purpose of OSL is to be as open and consistent as possible for everyone, including the solid renderers in the market like V-Ray, Arnold, and RenderMan, among others, as well as the new renderers emerging around this new language. I would really love to implement a shader the way that I want, using a BRDF that I like or even one built from scratch, and rely on the renderer to use its radiance, rays, and anything else it has to calculate what I need based on its API... if you know what I mean. Those are my thoughts about the OSL proposal.

Best regards,
Antonio Neto.

Larry Gritz

Apr 7, 2015, 5:36:56 PM
to osl...@googlegroups.com
I would also like a common (that is, cross-renderer) way to describe BSDFs themselves. My first choice would be to do so in OSL itself, though I'd settle (especially as a first stab) for a C++ plugin API.

What can I say? Talk to the vendor of your OSL-based renderer of choice. If this is something the users want, maybe everybody will see fit to agree on the framework for an extensible implementation. I don't think there is anybody opposed to this in principle, it's just that nobody has stepped up to do the work and make a concrete proposal.

-- lg



Antonio Neto

Apr 8, 2015, 12:14:09 PM
to osl...@googlegroups.com
Right, it would be really nice to bring the main renderer vendors together with you guys to establish a common nomenclature, structure, and process for the renderer APIs on the OSL side, so that we could write OSL shaders that would work in any renderer implementing that structure in its API. You know what I mean? Of course that could be very tricky and hard... I am probably not the right person to say which parts of that are possible and which are not, but that's the idea.
With that in place we would have much broader possibilities, and I believe anyone would be free to create new BRDFs and explore complex material properties as they wish, the way shader writers can with C++ and the renderer SDK in hand...
Would that make sense?

Best regards,
Antonio Neto.

Alex Conty

Apr 9, 2015, 12:51:43 PM
to osl...@googlegroups.com
I just don't think it is realistic to go for a common BSDF API that doesn't pull a lot of tricky renderer details into it. It is already hard to have cross-renderer shaders just because we have different approaches to which closures to use. For BSDFs we also get ray types, differentials, path-dependent sampling hacks, whether we sample everything together or split lobes, and of course efficiency. A shader is called once per shading point, which may be a few hundred times per pixel; a BSDF gets called a lot more often, so you want a streamlined API for your renderer.

We have had complete overhauls just because it is slightly more convenient to return some value as a quotient and whatnot. But if anybody comes up with some clever idea I'm all ears.

alex

Antonio Neto

Apr 9, 2015, 6:41:55 PM
to osl...@googlegroups.com
Hey guys, I was reading again what Larry said about view dependence, and I noticed that in my first comment I mentioned dot(N,-I), but that was a mistake: I initially thought I was the light direction, when it is actually the view direction... So I've been looking through the documentation and couldn't find a parameter for the light direction, with which I could compute dot(N,L), vector H = normalize(I+L), and do all the operations necessary to create new BRDFs... Is it possible to get the light direction somehow? What am I missing?

Best regards,
Antonio Neto.

Alex Conty

Apr 10, 2015, 6:17:31 AM
to osl...@googlegroups.com
There is no light direction at shading time. That's one of the big differences between OSL and RSL. In the OSL world, shading is about defining the surface properties regardless of incoming light or viewing direction. We still provide the view vector (and ray differentials) to enable optimizations, but the shade call is not in a light loop. So no L vector, sorry.

That would be part of a future BSDF API, but not in the shading function.

alex

Antonio Neto

Apr 10, 2015, 7:32:10 AM
to osl...@googlegroups.com
Thank you guys for all the explanations.
I feel that without knowing the light direction, the amount of things we can do is limited.
I keep looking through the documentation PDF for some hack around this... I was wondering whether vector L = normalize(Ps - P); could help somehow, or whether a transform from one space to another would be needed; I am not sure whether both of them are in "common" space, which, from what I read in the documentation, is where the light calculation should happen.
Also, I don't know how OSL does energy conservation. If I write, say, Ci = diffuse(N) + microfacet("beckmann", Nshading, Roughness, eta, 0); will my shader balance the energy between the specular and the diffuse? I feel they will go above 1.0 if I can't do something like closure color diffuseEC = diffuse(N) * (1.0 - microfacet("beckmann", Nshading, Roughness, eta, 0)), if you know what I mean?
Anyway, looking forward to helping the community grow.

Best regards,
Antonio Neto.

Dan Kripac

Apr 10, 2015, 10:47:41 AM
to osl...@googlegroups.com
Hey Antonio,

OSL is quite different when coming from a renderman shader language background or from writing C++ shader plugins that use lighting loops for many renderers.

Basically the power of OSL comes from it being very clever and efficient at optimising down potentially very complex texturing graphs due to having an overview of an entire connected shader graph topology.

Whenever you multiply colours with closures in your OSL shader (i.e. what you would normally have done inside a light loop in RSL), what happens is that your OSL renderer plug-in ends up with a list of closure IDs and argument data, along with the colour you multiplied with in your shader.

The renderer plug-in can then perform a fast lighting-integration loop using C++ BSDF functions without needing to re-evaluate the texturing graph, vastly simplifying the code executed for many light samples.

This decoupled texturing and BSDF evaluation is advantageous for performance and for the lighting-integration techniques of modern ray-tracing renderers that implement physically based rendering, as described in the book Physically Based Rendering.

Not being able to define BSDFs at runtime in OSL is definitely a limitation, but I have a feeling that this will be addressed sometime in the future.

Hope this helps!
Dan

--
You received this message because you are subscribed to the Google Groups "OSL Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to osl-dev+u...@googlegroups.com.
To post to this group, send email to osl...@googlegroups.com.
Visit this group at http://groups.google.com/group/osl-dev.
For more options, visit https://groups.google.com/d/optout.

Larry Gritz

Apr 10, 2015, 11:59:13 AM
to osl...@googlegroups.com
On Apr 10, 2015, at 4:32 AM, Antonio Neto <neto...@gmail.com> wrote:

> Thank you guys for all the explanations.
> I feel that without knowing the light direction, the amount of things we can do is limited.
> I keep looking through the documentation PDF for some hack around this... I was wondering whether vector L = normalize(Ps - P); could help somehow, or whether a transform from one space to another would be needed; I am not sure whether both of them are in "common" space, which, from what I read in the documentation, is where the light calculation should happen.

I think you're still one step from fully understanding the situation. It's not just that we don't *expose* L (if that was the case, maybe you could find a clever way to get at the same value). The real issue is that the shader runs before the renderer has chosen a light position. And furthermore, the closure that the shader computes for that position on the surface might be reused for many lights, light positions, and view directions.


> Also, I don't know how OSL does energy conservation. If I write, say, Ci = diffuse(N) + microfacet("beckmann", Nshading, Roughness, eta, 0); will my shader balance the energy between the specular and the diffuse? I feel they will go above 1.0 if I can't do something like closure color diffuseEC = diffuse(N) * (1.0 - microfacet("beckmann", Nshading, Roughness, eta, 0)), if you know what I mean?

This is complicated. Your formula is a little weird, so let's rewrite it a bit:

    Ci = Kd * diffuse(N) + Ks * microfacet("beckmann", N, ...);

(experts: let's sweep all the details I know you're thinking of under the rug for a moment)

OK, so in a language like RSL, diffuse() and microfacet() are functions that return concrete color values, which you then scale, add, and assign to Ci, which is also a color. The renderer gets back Ci, numeric values for RGB, representing the outgoing radiance in a particular direction. It's a number like (0.25, 0.89, 0.21). If you want to know the outgoing radiance in a different direction, or from a different set of light positions, you have to run the shader again and get another Ci value, such as (0.37, 0.42, 0.74).

So, as you suspect, there is nothing, NOTHING preventing the careless shader writer (or dialer of shader parameters) from setting Kd and Ks to sum to more than 1.0.

But that's not OSL!

In OSL, closures are not colors, they are NOT collections of numbers. A closure is a sum of weighted function pointers and numeric arguments for the functions. So maybe a pictorial representation of OSL's Ci might be:

     weight      function    arg1(N)        arg2(roughness)
     ------------------------------------------------------
     .4,.4,.4    "diffuse"   (0.6,0.7,0.9)  -
     .52,.49,.1  "beckmann"  (0.6,0.7,0.9)  0.01

The underlying implementations of diffuse and beckmann have additional parameters, incoming and outgoing directions (you can think of them like I and L), that are not supplied by the shaders. The renderer will supply those for each eventual call of the underlying diffuse and beckmann, as it chooses samples and directions. So at some point, an I and L will be chosen, and there will be a call to

    outgoing_radiance += weight * function.eval (I, L, arg1, arg2, ...)

But that's long after the shader has completed. Also, there's more that the renderer can do with the function than evaluate it. How about this:

    light_direction = function.sample (I, random1, random2, arg1, arg2, ...)

It can *sample* the functions (supplied with uniform random values, the "sample" method is expected to pick the right probability distribution function to make good samples in directions that are "important"). This is really, really tricky in a language like RSL because there is a chicken-and-egg problem: the BSDF evaluation functions need the light directions already chosen, but the renderer doesn't know which are important directions to sample without knowing the BSDF. Uh oh. The closure approach fixes all of this.
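To make the sample() idea concrete, here is a self-contained sketch (hypothetical code, not any renderer's API) of cosine-weighted sampling for a Lambertian lobe. Because the sampling PDF matches the cos(theta) factor in the integrand, the Monte Carlo estimator f * cos(theta) / pdf collapses to the albedo -- exactly the "important directions" matching described above:

```cpp
#include <cmath>
#include <random>

// Hypothetical sketch of a lobe's sample() method and how a renderer uses it.
constexpr float kPi = 3.14159265f;

struct Vec3 { float x, y, z; };

// Cosine-weighted hemisphere sample around the normal (0,0,1); pdf = cos/pi.
static Vec3 cosine_sample(float rx, float ry, float* pdf) {
    float r = std::sqrt(rx), phi = 2.0f * kPi * ry;
    Vec3 wi { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - rx) };
    *pdf = wi.z / kPi;
    return wi;
}

// White-furnace check for a Lambertian lobe: the Monte Carlo estimate of
// the integral of f_r * cos(theta) over the hemisphere equals the albedo,
// because sample() draws directions with exactly the matching PDF.
float furnace_estimate(float albedo, int n) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        float pdf;
        Vec3 wi = cosine_sample(u(rng), u(rng), &pdf);
        float f = albedo / kPi;    // Lambertian BRDF value
        sum += f * wi.z / pdf;     // f * cos(theta) / pdf
    }
    return float(sum / n);
}
```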

Anyway, in this scenario, it is trivial for the renderer to "fix" any situation where the shader has grossly over-weighted the BSDF contributions. Each BSDF primitive function will integrate to 1.0. So the renderer just sums the weights of the components, and if they exceed 1.0, it then divides all the weights by their sum, keeping them in the same balance but ensuring that they never sum to more than 1.0.
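That renormalization step could look like the following (a hypothetical helper written for this sketch, not an actual OSL or renderer function):

```cpp
#include <cmath>
#include <vector>

// Hypothetical helper illustrating the weight renormalization described
// above: each primitive BSDF integrates to 1.0, so if the closure weights
// sum past 1.0, rescale them all by the sum -- preserving their balance
// while capping the total reflected energy.
void renormalize_weights(std::vector<float>& weights) {
    float total = 0.0f;
    for (float w : weights) total += w;
    if (total > 1.0f)
        for (float& w : weights) w /= total;  // keep ratios, cap sum at 1.0
}
```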

Now, that's not perfect. It ensures that integrated over ALL angles, it's energy conserving. But for a particular angle, various combinations of BSDFs (even if the sum of their weights <= 1.0) could be non-conserving. This is an ongoing problem (though less severe than Kd+Ks > 1), and it's the reason that we have *pairs* of BSDFs that are meant to sum properly and mutually conserve energy -- for example, a renderer might have a dielectric_reflection and a dielectric_refraction that, if given the same roughness and IOR, are guaranteed to match in a way that conserves energy. Even that is tricky, so the foolproof way is to make a single dielectric_everything that handles it all for you and leaves no room for the shader writer or lookdev TD to dial things to evade the rules of energy conservation. A lot of people are starting to gravitate to this approach, where a typical shader group uses just one closure that does it all (perhaps choosing among several, but never combining except for sets that are purposely designed to produce the right results when coupled).

I hope this gives a bit more concrete idea of what's going on. It's a big conceptual shift, and while at first it seems like it's taking functionality away from the shader, what it's really doing is moving the division of labor between the shader and renderer to a more sane place, where there is a clean line between describing the material, and describing the rendering algorithm (including the sampling and integration), and this turns out to have HUGE benefits.






--
Larry Gritz
l...@larrygritz.com



Antonio Neto

Apr 10, 2015, 3:01:42 PM
to osl...@googlegroups.com
Very very interesting, Larry, thanks once more!

I was not so aware of some of the stuff under the hood, and it's definitely a mind shift in terms of concepts. I will keep reading as I digest everything involved in the process. Some of it I've already got, like the pairs of BRDFs that match, etc. Also, while exploring the closures, I made a render test using the ward closure with trace_reflections true, and it was damn slow. I don't think the standard shader with the Ward BRDF selected in the renderer I'm using to test OSL stuff (in this case V-Ray) is that slow...

Best,
Antonio Neto.