FEATURE REQUEST: "shader" type and ability to "call" shaders


Master Zap

Jul 1, 2018, 2:07:20 AM7/1/18
to OSL Developers
Lots of renderers and shading systems allow this. 

I know OSL wants to be all pure and fancy and the shader graph to be a pure DAG - and I can see the theoretical purity in that - but artists keep asking me for these things.
And I don't think it's impossible, really. It can be done in a well-defined way.

What am I talking about? About treating attached shader inputs effectively as a "subroutine" that can be "called" with overridden globals, maybe even multiple times.

This is super useful in so many instances.

Take for example my Randomized Bitmap shader... which places random things in random places all over an object... right now it has to be limited to pure textures (images) because there would be no way for the Randomized Bitmap shader to modify the UV lookup of an attached input.

I would suggest it work something like this:

#1: A new data type called "shader"
#2: A syntax to "call" this shader with overridden globals or parameters.

So like this:

shader ShittyBlur (
    int BlurSamples = 8,
    float BlurWidth   = 0.1,
    shader ShaderToBlur,
    output color Blurry = 0.0
)
{
    for (int i = 0; i < BlurSamples; i++)
    {
        vector rnd = (noise("hash", P, i) - 0.5) * BlurWidth;
        // Proposed syntax: "call" the attached shader with overridden u/v
        Blurry += ShaderToBlur.call("u", u + rnd[0], "v", v + rnd[1]);
    }
    Blurry /= BlurSamples;
}

The above shader would multi-sample the attached input shader, calling it again and again with slightly different u and v coordinates and averaging the result.

This is just one of 1000 examples of how this could work nicely.

Arnold uses this for many things, like the Bump2d shader's way of calling the input shader multiple times to compute the gradient, or the Toon shader's way of calling the shaders connected to its "tonemap" inputs to apply gradients to shaded results.
Legacy 3ds Max shaders use this a lot to do things like camera mapping, or in other ways modify the UV lookup of connected shaders, just like in this example.
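The Bump2d case, for instance, could look roughly like this with the proposed syntax (just a sketch; the parameter names and the return convention of .call() are made up, and the derivative math is simplified):

shader GradientBump (
    shader HeightShader,      // any scalar pattern, using the proposed "shader" type
    float  Epsilon = 0.001,
    float  Amount  = 1.0,
    output normal OutN = N
)
{
    // Hypothetical: call the attached shader three times with nudged u/v
    // and build a finite-difference gradient from the results.
    float h  = HeightShader.call();
    float hu = HeightShader.call("u", u + Epsilon);
    float hv = HeightShader.call("v", v + Epsilon);
    OutN = normalize(N - Amount * (dPdu * (hu - h) + dPdv * (hv - h)) / Epsilon);
}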

I don't think this is rocket surgery to implement either on the OSL side of things, and would be a spiffy addition to OSL 2.0 IMHO


Thoughts, everyone?


/Z

Master Zap

Jul 1, 2018, 2:09:47 AM7/1/18
to OSL Developers
Another example that came up today when I was making a fake raymarching laser beam: any atmospheric shading in my fake volume has to be "in" the shader. It would be so much nicer if I could just connect any shade tree to my "density" input and have the raymarching shader sample that at whatever points it wanted....

(Attaching a glowing cube for entertainment) :)

/Z
glowing-cube.png

Changsoo Eun

Jul 2, 2018, 6:18:43 PM7/2/18
to OSL Developers
EXACTLY!

This is what I was trying to say in my previous attempt..
Seeing this request coming from Zap makes me very very very very happy.

Master Zap

Jul 3, 2018, 8:06:03 AM7/3/18
to OSL Developers
Actually, we don't need a new type, just a new function, evaluate(), which takes a regular input value but evaluates it in a context with overridden globals/inputs.

shader ShittyBlur (
    int BlurSamples = 8,
    float BlurWidth   = 0.1,
    color ColorToBlur = 0.0,
    output color Blurry = 0.0
)
{
    for (int i = 0; i < BlurSamples; i++)
    {
        vector rnd = (noise("hash", P, i) - 0.5) * BlurWidth;
        // Proposed: re-evaluate whatever is connected to ColorToBlur with overridden u/v
        Blurry += evaluate(ColorToBlur, "u", u + rnd[0], "v", v + rnd[1]);
    }
    Blurry /= BlurSamples;
}

Olivier Paquet

Jul 3, 2018, 10:27:28 AM7/3/18
to OSL Developers
On Sunday, July 1, 2018 at 02:07:20 UTC-4, Master Zap wrote:
I know OSL wants to be all pure and fancy and the shader graph to be a pure DAG - and I can see the theoretical purity in that - but artists keep asking me for these things.
And I don't think it's impossible, really. It can be done in a well-defined way.

What am I talking about? About treating attached shader inputs effectively as a "subroutine" that can be "called" with overridden globals, maybe even multiple times.

It's not just "theoretical purity". This kind of pattern:

- mess with global variables
- run piece of code which uses global variables
- mess again with global variables
- run piece of code which uses global variables

goes against the last 30-40 years of software engineering wisdom. I get that it looks like a quick and painless short-term solution to a problem. But it feels as wrong as the guy walking off from the group to explore a dark corridor alone in the Nth Alien sequel. Just because we're a bit behind times in the CG world does not mean we should keep doing things the wrong way.

Besides, u and v are surface parametric coordinates, not texture coordinates. So in our renderer at least it would be useless to override them as far as moving textures goes. Worse, it would make dPdu, dPdv incoherent. And it would screw up attribute lookups, which need u,v to evaluate the attribute, etc.

I get that there's a need for some nicer mechanism to do things like texture projections but I don't feel like this should be it.

Olivier

Master Zap

Jul 4, 2018, 12:53:50 AM7/4/18
to OSL Developers
On u / v ... you took a little too much out of a simplified example....

Besides, the model isn't "mess with global variables" in the uncontrolled way you are insinuating. 
Rather, it is a well-defined, cleanly scoped, temporary modification of state (parameters or "globals") while re-running code with a well defined set of parameters.
(Besides, it's not my fault OSL is working with a concept of "magical globals" - one of the parts of the language I dislike the most) :)

Think about it as using shader connections as pluggable subroutines, if you will. 

This is extremely useful in an insane number of cases. The "ShittyBlur" example was just the 1st that popped up in my head and was an easy way to describe what I mean conceptually in a simple example. Don't take it too literally...

/Z

Zap Andersson

Jul 4, 2018, 1:14:28 AM7/4/18
to osl...@googlegroups.com
Let me take a more real-world example where this is useful.

In 3ds Max, there is a shader called Randomized Bitmap, which scatters a set of images across an object randomly.

Here's a (bad) video of it in action:


It takes up to 10 images and randomly splatters them across a surface, alpha-blending them on top of each other, looking the images up with different rotations, scaling, whatnot.

(In practice it is using a set of grid cells, using cellnoise for each of the cells to compute randomized modifications for the bitmaps within that cell, including actually moving out of the cell into neighbouring cells.)

There is also a way to drive the probability of whether a bitmap should show up or not in its cell, driven by another cellnoise from that cell.
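Roughly, the per-cell machinery is something like this (a minimal sketch of the idea, not the actual Randomized Bitmap code; cellnoise() and texture() are existing OSL, everything else - names, layout, the missing neighbour-cell overlap - is simplified for illustration):

shader BombSketch (
    string Filename = "",      // stand-in for one of the up-to-10 bitmaps
    float  Probability = 1.0,
    float  CellSize = 0.25,
    output color Out = 0.0
)
{
    // Which grid cell is this shading point in?
    point cell = point(floor(u / CellSize), floor(v / CellSize), 0);

    // Per-cell random values: cellnoise is constant across the whole cell,
    // so every sample in the cell agrees on the placement and on the
    // show/don't-show decision.
    float show = cellnoise(cell, 0);
    float offx = cellnoise(cell, 1) * CellSize;
    float offy = cellnoise(cell, 2) * CellSize;

    if (show < Probability)
    {
        // Randomized lookup position within the cell.
        float s = (u - cell[0] * CellSize - offx) / CellSize;
        float t = (v - cell[1] * CellSize - offy) / CellSize;
        if (s >= 0 && s <= 1 && t >= 0 && t <= 1)
            Out = texture(Filename, s, t);
    }
}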

Here are the problems with this:


PROBLEM #1: This only lets me scatter bitmaps (images) around. That's very limiting. 

What if I wanted to scatter a procedurally generated thing around, or even scatter an image with a color correction operation on it? That's impossible with OSL. 

The only thing the user can supply for the shader to "look up" is an image file. What I want is the ability to supply anything procedurally driven, and have that "looked up" by the shader just the same.

Actually creating the same visual effects with pure DAG shaders might be possible with some insane spaghetti setup, but you would have to break out texture-coordinate-generating shaders separately, and wire them all into some kind of switcher that picks which subshader to actually drive and.... since N number of lookups may happen at the same sample... I'm not even sure it is possible, or that any mere human would be able to wrap their head around the insane spaghetti that would be needed to do something so "simple".


PROBLEM #2: Subtle problem of probability

So each bitmap can show up with a certain probability; basically I compute a float cellnoise for each bitmap and compare its output to a set probability threshold. If below, paint it; if above, don't.

This works great until the user attempts to texture the probability too. He puts some undulating noise function into the "probability" input, and wonders why his bitmaps are getting cut off.

Well, they are getting cut off because at the point where the noise undulates above the probability threshold, the bitmap will just stop being painted, right at the transition point of the noise function, totally disregarding that you were halfway through the bitmap and have now cut off half of it.

What really needs to happen is that any randomization of the probability for bitmap N needs to be looked up with a noise function driven by the cell of bitmap N, so that the same "randomized" probability value is computed for the entirety of the bitmap.

The only way I could make that work now is to build "noise probability" into the Randomized Bitmap shader itself. Which bloats it with way too many parameters, and that noise will still not satisfy the users who want the distribution of graffiti on a wall to follow some particular distribution....



IF WE HAD my proposed functionality, this would have been trivial:

Rather than my code having to have texture lookups, it could just have had ten regular color inputs.

The inputs would then be looked up with something like

    color blah = evaluate(input, "UVW", myUVcoord);

Assuming all texturing shaders by convention have a "UVW" input, any shader connected to my input whose own "UVW" input is unconnected would get the value I send to the evaluate function fed in. Done!

The same would work for the probability; I would just do

   float prob = evaluate(probability, "UVW", cellPosition);



I could think of infinitely more examples.



/Z




Olivier Paquet

Jul 4, 2018, 4:15:37 PM7/4/18
to OSL Developers
On Wednesday, July 4, 2018 at 00:53:50 UTC-4, Master Zap wrote:
On u / v ... you took a little too much out of a simplified example....

Besides, the model isn't "mess with global variables" in the uncontrolled way you are insinuating. 
Rather, it is a well-defined, cleanly scoped, temporary modification of state (parameters or "globals") while re-running code with a well defined set of parameters.

It's still global parameters. And the whole thing would be limited to globals which is not great if you eventually want to drive some other parameter of the network in a similar way (from the downstream shader).
 
(Besides, it's not my fault OSL is working with a concept of "magical globals" - one of the parts of the language I dislike the most) :)
 
There was a thread a few days ago which discussed trying to phase them out. Which is another reason to find a better solution to this problem.

This is extremely useful in an insane number of cases. The "ShittyBlur" example was just the 1st that popped up in my head and was an easy way to describe what I mean conceptually in a simple example. Don't take it too literally...
 
I'm not saying it's not useful. It is. It's also a dangerous tool which can be abused to write horribly inefficient or unmaintainable shaders. But that's beside the point. What I am trying to get to is that we should try to come up with a way to do the same thing without being limited to globals. It would certainly be useful to manipulate other values besides "u,v" in some cases. It would also be nice for that to be explicit in how the shaders are connected (would probably make the optimizer's job easier). Or perhaps it should be a way to call a completely separate shading network, overriding any set of inputs we want to. I don't know quite what it should look like but it's worth trying to see beyond the immediate need. If we really can't come up with anything better then fine. But I still have a bad feeling about it coming back to bite us at some point.

Olivier

Zap Andersson

Jul 4, 2018, 4:59:45 PM7/4/18
to osl...@googlegroups.com
I never said it would be limited to globals... I want to be able to override anything the "evaluated" shader references.

/Z




--
--
Håkan "Zap" Andersson - http://twitter.com/MasterZap - the man, the myth, the concept.
--

Larry Gritz

Jul 4, 2018, 9:12:31 PM7/4/18
to osl...@googlegroups.com
Sorry for the delay. Sometimes when people ask about things that are sufficiently deep, it takes me a couple days to gather my thoughts and respond. (Limiting factor: best ideas seem to come in the shower, there are only so many showers I can take daily.)

I hear you guys and understand what you want: a way to take a shader node graph and call it like a subroutine, potentially multiple times with differing parameters.

I can see that there are some neato things you could do with this. The question is how to do it in a clean way that doesn't totally bork all the optimizations we rely on or encourage awful shaders that are hard to understand.

Let's sweep the "modify globals" ugliness out of the way by assuming that this would be done *after* a previously-discussed migration away from "global variables" and toward just using what looks syntactically like shader parameters (which in some cases may be understood to bind to things in the shaderglobals or outputs).

I'm not a fan of "every time you grab a parameter, it magically re-evaluates the upstream network", because it's just a recipe for confusion and wasted computation. I won't even go into the details, but suffice it to say that I can rattle off edge cases where optimizations we depend on would be ruined.

But Zap's idea of some kind of explicit evaluate(paramname, ...) has merit and is possibly growing on me. It's a sharp tool that can easily be used to hurt yourself, but at least it can never truly surprise you -- things can only re-execute if you call it, and it's very visible when you do this. It's still fraught with danger, details that we'd need to ponder, and some limitations we'd want to impose. As examples:

* Presumably it would invalidate the results *all the way up the chain* from the parameter being pulled?

* You would probably only be allowed to set new values of parameters (of the upstream subnet) that were previously marked as `lockgeom=0`, or else they might have been constant-folded away. (Unless, ick, you assume that any subnets that are potentially named by evaluate() would have wholesale drastically less optimization done to them.)

* If results from that multiply-executed subnet are used elsewhere in the network (I mean, one of its outputs is connected to some other input elsewhere, besides the things getting an evaluate() call), then the fact that the evaluation *order* of nodes is nondeterministic means that you won't know which of the output values (from the potentially several times it was called) will end up copied to other places. So maybe it's only safe/predictable to do this if the subnet in question ONLY connects to the node that is doing the evaluate() call. It probably also follows that nodes in a subnet implicated by an evaluate() call would not be able to participate in the "identical node deduplication" optimization that we currently do.


Now, as an aside, I just realized that a lot of your fantasy feature may be partially fulfilled with the "osl.imageio". Are you familiar with that?  Look in src/osl.imageio/oslinput.cpp, the comments explain the gist, but the short version of the story is that it's a DSO/DLL for OIIO that dynamically recomputes image pixels (including texture) by executing OSL code. So you would access it in your shader as a texture, literally texture("...", xcoord, ycoord), and it would be running OSL code behind the scenes. The only parameters that "vary" are the 2D texture lookup coordinates, but you can set other parameters on the subnet by embedding them in the "filename" with a REST-like syntax. And further, since it goes through the texture system, it is able to antialias/filter itself and responds to the usual texture controls (including blur!).

I think that for a straightforward "texture bombing" or "triplanar mapping", you could use your usual bomb/triplanar texture and just use a specially crafted texture filename to trigger OSL code to run to make a procedural pattern that you would be bombing. Maybe that at least partially scratches the itch?
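In shader code it would look something like this (note: the filename syntax below is completely made up for illustration; the actual REST-like convention is described in the oslinput.cpp comments):

shader BombProceduralViaOIIO (
    output color Out = 0
)
{
    // Hypothetical usage sketch: the "filename" names an OSL network to run and
    // embeds parameter overrides; the 2D lookup coordinates are the only varying
    // inputs, and the usual texture options (like "blur") still apply.
    Out = texture("mypattern.oslgroup?scale=4&octaves=6", u, v, "blur", 0.01);
}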

-- lg



--
Larry Gritz




Master Zap

Jul 5, 2018, 1:20:23 AM7/5/18
to OSL Developers

I hear you guys and understand what you want: a way to take a shader node graph and call it like a subroutine, potentially multiple times with differing parameters.


In a nutshell, yes. 

 
 
I'm not a fan of "every time you grab a parameter, it magically re-evaluates the upstream network", because it's just a recipe for confusion and wasted computation. I won't even go into the details, but suffice it to say that I can rattle off edge cases where optimizations we depend on would be ruined.


Right, and I'm not sure I ever asked for that. Any "regular" request for an output value would work exactly the way it does today, and would be 100% unaffected by this additional feature.

 
But Zap's idea of some kind of explicit evaluate(paramname, ...) has merit and is possibly growing on me. It's a sharp tool that can easily be used to hurt yourself,

It's no worse than message passing, which, ick, should never ever have been in there. Ever. :)
 
but at least it can never truly surprise you -- things can only re-execute if you call it, and it's very visible when you do this. It's still fraught with danger, details that we'd need to ponder, and some limitations we'd want to impose. As examples:

* Presumably it would invalidate the results *all the way up the chain* from the parameter being pulled?

No, not really... only downstream from the variable in the chain you modified.... (or, in the case of those icky globals, anything reading that global)

In a sense, an "eavulate" call of a graph would build a separate version of that optimized graph under the hood, optimized differently for that particular "evaluate" call. 

You modify "P" to re-execute some 3d procedural? 

Only the nodes in that shading graph referencing P need to be re-run. If you modify an input called "UVW", only input(s) by that name (that are not connected to upstream nodes) would have their values modified, and only code downstream from that point (those points) even has to re-run.
 

* You would probably only be allowed to set new values of parameters (of the upstream subnet) that were previously marked as `lockgeom=0`, or else they might have been constant-folded away. (Unless, ick, you assume that any subnets that are potentially named by evaluate() would have wholesale drastically less optimization done to them.)

Ah, but the optimization of the re-execution is an independent problem from the optimization of the graph when used "normally". The optimizer would need to think of this as its own thing, effectively building a parallel "clone" of the graph under different optimization constraints.

But these constraints are well defined; we know exactly what is being modified, we can tell by the parameter list of "evaluate" even....!

 
* If results from that multiply-executed subnet are used elsewhere in the network (I mean, one of its outputs is connected to some other input elsewhere, besides the things getting an evaluate() call), then the fact that the evaluation *order* of nodes is nondeterministic means that you won't know which of the output values (from the potentially several times it was called) will end up copied to other places. So maybe it's only safe/predictable to do this if the subnet in question ONLY connects to the node that is doing the evaluate() call. It probably also follows that nodes in a subnet implicated by an evaluate() call would not be able to participate in the "identical node deduplication" optimization that we currently do.

No no, this feature cannot affect regular evaluation in any way.... it has to be side-effect free. Any caching of numbers and values has to be done separately for the "regular" call of the graph vs. each "evaluate" call of the graph.
 
Now, as an aside, I just realized that a lot of your fantasy feature may be partially fulfilled with the "osl.imageio". Are you familiar with that?  

I am, and it may be a neat toy, but remember that my constraint is making the same OSL code run everywhere.... which is already today a big problem with imageio being configured differently on different OSL-capable renderers..... 

(Of course, building new features into OSL would cause the same problem, I admit, and herding the cats of the renderers to align on an OSL version, well... :) )

The osl.imageio trick is fun, but effectively limited to 2d use cases in the unit square...

...and while that might solve the trivial case of a "bitmap+colorcorrect" being an input to Randomized Bitmap, it won't solve ANY of the more interesting cases, like plugging a 3d noise field into a fake raymarching shader to fake volumetric effects, or even the quite real-world (for users) problem of mapping the "probability" parameter of Randomized Bitmap to something under the user's control (as described in my previous message).



/Z

P.S. I even pondered building a "shading graph to bitmap" shader into 3ds Max itself, which would effectively 2d-bake any shade tree (OSL or not) and appear to the downstream side of the shade tree as if it were a regular bitmap. Similar to the osl.imageio "trick" but done wholly on the app side.

Master Zap

Jul 5, 2018, 1:51:31 AM7/5/18
to OSL Developers
Let's make an even real-worlderer example. Here's my up-to-4-dimensional Mandelbrot shader:

// A simple Mandelbrot set generator shader
// mandelbrot.osl by Zap Andersson
// Modified: 2018-02-08
// Copyright 2018 Autodesk Inc, All rights reserved. This file is licensed under Apache 2.0 license
// https://github.com/ADN-DevTech/3dsMax-OSL-Shaders/blob/master/LICENSE.txt

shader Mandelbrot
    [[ string help = "A four dimensional mandelbrot/julia set generator" ]]
(
    vector UVW = vector(u,v,0)
        [[ string help = "The coordinate to look up. Defaults to the standard UV channel" ]],
    vector Center = 0,
    float Scale = 0.35,
    float ZImaginary = 0.0,
    int Iterations = 100,
    float ColorScale = 1.0,
    float ColorPower = 1.0,
    output color Col = 0,
    output float Fac = 0.0
)
{
    vector pnt = (UVW - point(0.5,0.5,0)) / Scale - (Center + point(0,0.66,0));
    float cR = pnt[0];
    float cI = pnt[1];
    float zR = pnt[2];
    float zI = ZImaginary / Scale;
    int num = 0;
    for (num = 0; num < Iterations; num++)
    {
        float zR2 = zR * zR; // Real squared
        float zI2 = zI * zI; // Imag. squared
        if (zR2+zI2 > 4.0)
            break; // Escapes to infinity
        zI = 2 * zR * zI + cR;
        zR = zR2 - zI2 + cI;
    }

    Fac = (float)(num * ColorScale) / (float)Iterations;
    Col = wavelength_color(420 + pow(Fac, ColorPower) * 2000);
}



I was able to render this volumetrically into this fancy movie:

https://www.youtube.com/watch?v=dNX4yhW3CJ0


But that thing was tediously rendered in Arnold w. actual volumetric shading. 


I realized that I could probably fake it 1000 times faster by making my own hacky raymarcher.


But since we are lacking an evaluate function, the only way to do it would be to literally hand-rewrite the shader like this:




void mandelbrot
(
    vector UVW,
    vector Center,
    float Scale,
    float ZImaginary,
    int Iterations,
    float ColorScale,
    float ColorPower,
    output color Col,
    output float Fac
)
{
    vector pnt = (UVW - point(0.5,0.5,0)) / Scale - (Center + point(0,0.66,0));
    float cR = pnt[0];
    float cI = pnt[1];
    float zR = pnt[2];
    float zI = ZImaginary / Scale;
    int num = 0;
    for (num = 0; num < Iterations; num++)
    {
        float zR2 = zR * zR; // Real squared
        float zI2 = zI * zI; // Imag. squared
        if (zR2+zI2 > 4.0)
            break; // Escapes to infinity
        zI = 2 * zR * zI + cR;
        zR = zR2 - zI2 + cI;
    }

    Fac = (float)(num * ColorScale) / (float)Iterations;
    Col = wavelength_color(420 + pow(Fac, ColorPower) * 2000);
}


shader Mandelbrot
    [[ string help = "A four dimensional mandelbrot/julia set generator" ]]
(
    float start = 0.0,
    float end = 100.0,
    int steps = 10,
    vector UVW = vector(u,v,0)
        [[ string help = "The coordinate to look up. Defaults to the standard UV channel" ]],
    vector Center = 0,
    float Scale = 0.35,
    float ZImaginary = 0.0,
    int Iterations = 100,
    float ColorScale = 1.0,
    float ColorPower = 1.0,
    output color Col = 0,
    output float Fac = 0.0
)
{
    float fac = 0.0;
    color col = 0.0;
    float delta = (end - start) / steps;
    for (int i = 0; i < steps; i++)
    {
        point pt = P + I * (start + delta * (i + noise("uperlin", P*10000)));
        float z = pt[2];
        pt[2] = 0.0;
        mandelbrot(pt, Center, Scale, z, Iterations, ColorScale, ColorPower, col, fac);
        Fac += fac;
        Col += col;
    }
    Col /= steps;
    Fac /= steps;
}



I had to rewrite the "shader" Mandelbrot to the "sub-function" mandelbrot, and then call this N times from my main shader instead, effectivly forcing me to build my 3d texture INTO my ray marcher....  That's silly, I shouldn't have had to do that!


Had there been an "evaluate" function, I would just have plugged my regular Mandelbrot into my raymarcher and, as they say, Bob would have been my Fathers Brother.


It would have *worked* exactly the same. The final optimized backend shading code would probably be identical to this case... but it would have been much more useful and flexible to the user, and much easier on the shader developer....
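With evaluate(), the raymarching side could have been written against a plain color input instead - something like this sketch (evaluate() being the proposed function; the parameter names are made up, and the connected network could be the regular Mandelbrot above or anything else):

shader FakeRaymarch (
    color Density = 0.0,     // connect the Mandelbrot's Col output (or any shade tree) here
    float start = 0.0,
    float end   = 100.0,
    int   steps = 10,
    output color Col = 0.0
)
{
    float delta = (end - start) / steps;
    for (int i = 0; i < steps; i++)
    {
        point pt = P + I * (start + delta * (i + noise("uperlin", P * 10000)));
        // Hypothetical: re-run the connected subtree per ray step, overriding
        // its UVW and ZImaginary inputs with the marched position.
        Col += evaluate(Density, "UVW", vector(pt[0], pt[1], 0), "ZImaginary", pt[2]);
    }
    Col /= steps;
}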



/Z



Olivier Paquet

Jul 5, 2018, 2:18:39 PM7/5/18
to OSL Developers
On Thursday, July 5, 2018 at 01:51:31 UTC-4, Master Zap wrote:

I realized that I could probably fake it 1000 times faster by making my own hacky raymarcher.


But since we are lacking evaluate function, the only way to do it would be to literally hand-rewrite the shader like this:


Shader authors writing mini-renderers in their shaders is one of the ways RSL shaders got really awful. I consider it a feature of OSL that it generally prevents this :-)

Olivier

Master Zap

Jul 5, 2018, 2:30:15 PM7/5/18
to OSL Developers
Uhm.. you are arguing my point for me :)

/Z

Paolo Berto

Jul 5, 2018, 6:09:54 PM7/5/18
to osl...@googlegroups.com
+1



--
paolo berto durante
j cube inc. tokyo, japan
http://j-cube.jp

Moritz Mœller (The Ritz)

Jul 5, 2018, 7:50:39 PM7/5/18
to osl...@googlegroups.com, Olivier Paquet
Indeed. +1.
Don't get me wrong, I'm seeing lots of good reasons to add some evaluate() like call to OSL.
But I agree that writing a ray marcher inside a shader is not one of them. ;)

Cheers,

.mm

Master Zap

Jul 6, 2018, 5:10:40 PM7/6/18
to OSL Developers
OMG.... stop taking my illustrative samples as the point... they are not the point.... the feature is the point... I'm just trying to come up with a quick case of illustrating the point !!!

/Z

Larry Gritz

Jul 6, 2018, 5:24:36 PM7/6/18
to osl...@googlegroups.com
Point is made. There are many cool applications to this feature. It's a good suggestion, now we just need to figure out the best way to express it in the language. Language changes are forever, so we don't rush the implementation of new ones before we've really had a while to mull it over.

-- lg



--
Larry Gritz




Changsoo Eun

Jul 7, 2018, 4:20:06 AM7/7/18
to OSL Developers
I really like where this conversation is going as an "artist".


Changsoo Eun

Mar 27, 2019, 4:50:15 AM3/27/19
to OSL Developers
Any update?



Larry Gritz

Mar 27, 2019, 10:56:31 PM3/27/19
to osl...@googlegroups.com
It's on the list somewhere to dig into this, I like the idea, but we haven't made any progress on it yet, sorry.

-- lg



--
Larry Gritz




Changsoo Eun

Mar 28, 2019, 2:50:44 AM3/28/19
to OSL Developers
Thanks for the answer!
I think having this would make OSL a lot more artist friendly.




Moritz Moeller

Mar 28, 2019, 10:04:21 AM3/28/19
to osl...@googlegroups.com, Larry Gritz
On 27.3.19 23:56, Larry Gritz wrote:
> It's on the list somewhere to dig into this, I like the idea, but we
> haven't made any progress on it yet, sorry.
>
>> On Mar 27, 2019, at 1:50 AM, Changsoo Eun <chang...@gmail.com
>> <mailto:chang...@gmail.com>> wrote:
>>
>> Any update?
>>
>> On Saturday, June 30, 2018 at 11:07:20 PM UTC-7, Master Zap wrote:
>> What am I talking about? About Treating attached shader inputs
>> effectively as a "subroutine", to be able to "call" it with
>> overridden globals, maybe even multiple times.

+1
This was one of the features that made DarkTree (or makes -- you can actually still use the software on Windows) so powerful.
The way it was modeled was just as another input to the node that ultimately called the incoming connection. So for an artist it was quite transparent what was going on.
I think there was no way to connect parameters though. Obviously, in OSL, this would be possible, making this an even more powerful feature.


.mm

Mitch Prater

Apr 10, 2019, 5:47:46 PM4/10/19
to OSL Developers
Hello All - I'm just checking back with this group and found this thread to be extremely topical.

As RenderMan moves towards a fully OSL-based system of pattern generation and away from plugins, we need some means of replicating the functionality provided by RIS mutable shading contexts and ray probing of geometry within an OSL pattern generation network. We also have plugins that do general system file I/O (artist readable/editable pattern specification).

The evaluate() function would seem to be a solution to the mutable shading context; depending on its implementation.

Casting rays to probe scene geometry for pattern generation is another capability we use often.

System file I/O is another requirement. I recently wrote an extremely complicated plugin to generate woven cloth that required the weave patterns to be specified in an artist-editable text file. The only reason it has to be a plugin (now that PRMan OSL has derivative functions) is to read the weave file. Writing it in OSL would have been so much easier!


mitch

Changsoo Eun

Oct 29, 2019, 4:54:20 PM10/29/19
to OSL Developers
Bumping.... any news on this front?
From an "artist" perspective, this is one of the biggest obstacles to moving to OSL.




Philippe LePrince

Oct 30, 2019, 11:30:28 AM10/30/19
to OSL Developers
This feature request feels like an ugly can of worms. If "artist-friendly" actually means "allow artist to shoot him/herself in the foot" then, sure, by all means.

Mitch, one reason we switched to OSL is to get rid of mutable shading contexts. They are horrible, slow and create a lot of unnecessary complication.

Cheers

Master Zap

Oct 30, 2019, 12:02:15 PM10/30/19
to OSL Developers
I disagree. 

First of all, in this proposal the "mutability" is quite well-defined, since you explicitly specify what is overridden for each "evaluation".

Secondly, a ton of things are just soo much easier with this approach.

A much better example than my blur example would be something like a tri-planar texturing shader. 

Today it's easy to write a tri-planar shader that accepts image textures. But what if you want to tri-planar project something procedural (even something as simple as an image texture through a color correction node) ... that's super hard now.

Today, if you want to make a triplanar feature that supports procedural things, you need to make two extra shaders: one that outputs the three image planes' texture spaces, and another that mixes the three projections based on the normal. And any texturing node/network you want to project, you have to clone three times. While this works, it's insane spaghetti and very un-intuitive.

With the proposed "eval" approach, you can plug any 2d shading shade graph into the input of a triplanar texture node, and the triplanar would handle creating the three different planes, set the u and v coordinates accordingly and call the input in three different ways.

The actual executing code in the end would be pretty much equivalent to the classic approach, but the usability for the user is 10,000 times better.

/Z

Changsoo Eun

Oct 30, 2019, 1:52:36 PM10/30/19
to OSL Developers
Try to explain to an artist why you cannot transform your OSL tree just like in Nuke.

Or.. try to explain why the UV transform must come "before" the shader. Not after.

In the end, it is the "artist" who uses this tool to make images.
If anything can make the artist's life easier and simpler, that's what developers should do, even if it means more work for the devs.

Mitch Prater

Oct 30, 2019, 3:26:15 PM10/30/19
to OSL Developers
Wow - those are some pretty negative adjectives Philippe :)

The fact is that I've used (the equivalent of) mutable contexts and ray tracing in pattern generation for many many years. These are powerful tools whose results cannot easily be replicated by other means - particularly if one's effects pipeline tends to be very shader-centric.

No one wants to create slow shaders, and I doubt anyone in this group would use this functionality indiscriminately. These capabilities have existed, and therefore have been used, for some time. If your stance is being prompted by the migration to rendering architectures that are more limited, that is not itself a sufficient reason to remove these capabilities altogether, IMHO, if there is some way they can be provided.

mitch

Larry Gritz

Oct 31, 2019, 1:22:16 AM10/31/19
to osl...@googlegroups.com
I understand the request, and I think it has merit. Though it's definitely a sharp tool to use with caution and I wouldn't blame anybody for eschewing the feature in their studio (it's likely to have expense and unintuitive behaviors that pop up in ways that are not apparent to nonexpert users). I also estimate a 10% chance that after going farther down the line of implementing, we could find a technical reason why the whole thing can't work well, for example by just too badly thwarting important optimizations that everybody relies on.

The fact is that I just haven't had the time to try implementing it yet. It's on my list, I just have finite resources. It's not that I'm simply ghosting the idea because I don't like it.

-- lg



--
Larry Gritz




Mitch Prater

Oct 31, 2019, 8:55:05 AM10/31/19
to OSL Developers
Thanks Larry. I'm certainly not demanding that this be done right away - just advocating that the idea not be abandoned without a proper investigation. I know you'll get to it when you can!

mitch




Changsoo Eun

Oct 31, 2019, 5:45:36 PM10/31/19
to OSL Developers
I just want to make it clear that the sole reason I keep asking is..

The current behavior is totally unintuitive for nonexpert users.
When "artists" think of a node-based shading system, they just assume it is supposed to act like Nuke.

Many simply would not even start to use OSL because of this.
Then optimization doesn't matter, since they don't use it at all.






Paolo Berto

Oct 31, 2019, 10:56:27 PM10/31/19
to osl...@googlegroups.com
I agree with Philippe (incredible!)

But I cannot resist teasing about that sentence: what do you mean, "we switched to OSL"?
Up until 22.x, only patterns are supported. You are saying that in 23 you can finally(!) use OSL closures? And does that retire the RixShadingPlugin?

If so, you guys _really_ like the long road to good decisions. Let's revisit history:

RSL --> RSL2 --> ISPC --> RixShading --> Added OSL (for patterns only) --> Full OSL (and maybe still retain Rix for double work?)

Amazing.

One thing is for sure: RenderMan shader writers will never be out of work with all this rewriting, though arthritis might be kicking in for those who went through the whole journey :)



--
Paolo Berto Durante
J CUBE Inc. Yokohama, Japan
http://j-cube.jp

Jonathan Gibbs

Nov 1, 2019, 11:05:21 AM11/1/19
to osl...@googlegroups.com


> On Oct 30, 2019, at 9:02 AM, Master Zap <zap.an...@gmail.com> wrote:
>
> A much better example than my blur example would be something like a tri-planar texturing shader.

This is a very good example. (Another one is doing texture bombing with not just images, but any procedural input.)

If you want triplanar projections to be easy to use, you have two choices:

1. If OSL can be extended in the way Zap asks, you can continue to make direct editing of the OSL shader graph a primary interface.
2. Otherwise, you end up in a situation where that graph is very unfriendly so you have to present an abstraction on top of it as the interface, then translate that down into OSL, adding a lot of complexity.

I think if done cleanly, what is changed is known and can be tracked by the renderer, so it's not in the position of just having to assume shaders can change anything.

—jono

Aghiles Kheffache

Nov 1, 2019, 1:09:23 PM11/1/19
to OSL Developers
Hello,
Today it's easy to write a tri-planar shader that accepts image textures. But what if you want to tri-planar project something procedural (even something simple as an image texture through a color correction node) ... that's super hard now.

Today, if you want to make a triplanar feature that supports procedural things, you need to make two shaders, one that outputs the three image planes texture spaces. Then any texturing node/network, you have to clone three times. Then, you need another node that mixes the three projections based on the normal. While this works, it's insane spaghetti and very un-intuitive.

We are actually solving this same problem right now. Of course, we don't show such spaghetti to the user! The user creates a normal network. A small post-process in 3Delight re-arranges networks. We add some meta-data to the shader to indicate such patterns to 3Delight.

* The Tri-Planar is indeed a good demonstration of the "limits" of networked shaders. But these usability limits do not have to be shown to the end user. (Note that the Maya projection node has the same problem as Tri-Planar.)
* For such cases please consider a special type of connection that is evaluated when needed, instead of having co-shaders. So you can run "evaluate" on an isconnected() shader parameter later inside your shader (presumably after modifying some variables). You still have your network, it's clearly defined, and it would solve some of the issues, I believe.
* Co-shaders quickly make things ugly. We have massive experience with them, unfortunately. People will abuse them, guaranteed. I am still surprised that people still believe in 2019 that such features will be used wisely and "only when needed".

Guys, don't do it.

-- Aghiles

Master Zap

Nov 1, 2019, 10:16:56 PM11/1/19
to OSL Developers
Sweet Jeebus NOOOO!!!

We tried this exact approach for our internal Autodesk renderer ART. It was a complete and utter disaster of maintainability in every conceivable way. 

The complete and utter failure of the very approach you propose is exactly WHY I am requesting this feature in OSL; Letting the language do it is the only viable, clean way to do it.

No behind the scenes spaghetti juggling. That's just madness!!!


/Z

Master Zap

Nov 1, 2019, 10:17:54 PM11/1/19
to OSL Developers
Just the very fact you call it "a normal network" shows just how wrong the approach is....

/Z

Master Zap

Nov 1, 2019, 10:25:46 PM11/1/19
to OSL Developers
(Sorry for the rapid fire posts - I wish google groups had an Edit button):

What I'm asking for is not "co shaders". Co-shaders are ugly things. This is different, and much much more well-defined.

The approach I propose doesn't have side effects, and is completely analyzable from a data-flow perspective.

It shouldn't need to trip up any renderer in any way (what gets produced for the LLVM to actually execute in the end is literally the same as the "juggled spaghetti", but done at the language level, with no error-prone behind-the-scenes poorly defined graph-juggling needed: just a "please do this thing again except this time this parameter has this value instead".)


/Z

P.S. When (or "if", but I hope for "when") adding this, I advice to strongly consider deprecating the message passing feature of OSL at the same time... !! 

Aghiles Kheffache

Nov 1, 2019, 10:28:46 PM11/1/19
to OSL Developers

On Friday, November 1, 2019 at 10:16:56 PM UTC-4, Master Zap wrote:
Sweet Jeebus NOOOO!!!

We tried this exact approach for our internal Autodesk renderer ART. It was a complete and utter disaster of maintainability in every conceivable way.  

And you are not exaggerating just a little bit? :) In every conceivable way?

I am wrapping up the 3Delight for Houdini plug-in. After finishing the C4D plug-in. And the Katana plug-in just before, and the Maya one just before that. This code in 3Delight is about 40 lines of "unmaintainable code" and worked extremely well and reliably on all these plug-ins. But again, NSI was designed to be simple.

The complete and utter failure of the very approach you propose is exactly WHY I am requesting this feature in OSL; Letting the language do it is the only viable, clean way to do it.

Implementing a parallel and totally orthogonal evaluation mechanism in OSL seems as clean as that toilet from Trainspotting. :)
 
No behind the scenes spaghetti juggling. That's just madness!!! 

So put the spaghetti juggling in the user's face instead!

-- Aghiles

 

Zap Andersson

Nov 1, 2019, 10:34:40 PM11/1/19
to osl...@googlegroups.com
Moi? Hyperbole!?! NEVER :)

/Z

"I told you a hundred million times - don't exaggerate! " - me


Aghiles Kheffache

Nov 1, 2019, 10:35:58 PM11/1/19
to OSL Developers
It seems I didn't understand your proposal well enough. I will get back to this tomorrow with a fresh look.

Larry Gritz

Nov 2, 2019, 12:39:45 AM11/2/19
to noreply-spamdigest via OSL Developers
Everybody stay calm. Fully general co-shaders are not on the table.

We're just talking about a way to explicitly "re-pull" a parameter in a way that triggers another execution of any upstream nodes that are connected to it (possibly with some changed parameters). It would be a very, very narrow communication and dependency channel, just enough to let you do things like the triplanar or texture bombing in a less awkward way.

Paraphrasing from Zap's proposed syntax, he's thinking something like

    result = evaluate (param, "name1", value1, ...)

Basically what this would be equivalent to

    result = param;   // must be a parameter to the current shader

EXCEPT...

* Any upstream connections contributing to param would be executed, not use a cached value from already having been run.

* For those upstream shaders that get re-evaluated in the process, any parameters matching the optional name/value pairs (such as "name1" above) will get that value for the execution (such a parameter must be marked as lockgeom=0, much like if you expected it to be interpolated across the surface from vertex variables, so that it's not optimized away).

Did I get that right?



Master Zap

Nov 2, 2019, 4:38:25 AM11/2/19
to OSL Developers
I think you got that exactly right. 

The only unclear thing (even to me!) is exactly what one "should" be able to modify.

The classic "globals" are easy, because they are the same across the entire accessed subgraph. And in my use cases, 99.9% I probably wanna modify u, v or P only.

But parameters? What if I say "diffuse_color" ... but there is a whole shade tree upstream with five different "diffuse_colors"... which does it apply to? All of them that are not already connected to? (Probably!)
But details like that need to be ironed out.

Actually, this feature would be useful even if it was only the classic globals that could be modified....

/Z

Master Zap

Nov 2, 2019, 4:43:40 AM11/2/19
to OSL Developers
I would love to see your 60-ish lines that solves this problem by re-jiggling the nodes, and how it would support the following example (back to example 1 :) )

shader ShittyBlur (
    color ColorToBlur = 0.0,
    int   BlurSamples = 32,
    float BlurWidth   = 0.1,
    output color Blurry = 0.0
)
{
    for (int i = 0; i < BlurSamples; i++)
    {
        vector rnd = (noise("hash", P, i) - 0.5) * BlurWidth;
        // Proposed: re-evaluate whatever is connected to ColorToBlur with overridden u/v
        Blurry += evaluate(ColorToBlur, "u", u + rnd[0], "v", v + rnd[1]);
    }
    Blurry /= BlurSamples;
}

/Z

Etienne Sandré-Chardonnal

Nov 2, 2019, 5:59:07 AM11/2/19
to osl...@googlegroups.com
Hi,

Having a variable be re-evaluated without explicitly calling a function is super awkward. The language would partially become behavioral (such as VHDL or Verilog) while still being a procedural language.

Why not have a special UV-modifier shader type that does not modify globals by itself but just returns a modified UV, letting the OSL core manage the dirty work internally? That way we would not expose writable UVs in surface shaders, and would not require any shader rewrites.

Étienne

Paolo Berto

Nov 2, 2019, 6:03:55 AM11/2/19
to osl...@googlegroups.com
> Blurry /= BlurSamples;

If I see this in an OSL shader I will immediately look elsewhere.


P


--
Sent from a parallel universe

Larry Gritz

Nov 2, 2019, 10:55:32 AM11/2/19
to osl...@googlegroups.com
Well, there's also a faction who largely think that "globals" should go away one of these days, so I don't think restricting it to globals is the right solution, either. They may shift to being explicitly declared as parameters at some point.

But yes -- a feature like this would have a LOT of sharp edges. Once you pull on a subgraph to be re-evaluated, you should presume it to be expensive, and possibly ambiguous. If there is a whole shade tree upstream with five different "diffuse_colors"... you've probably dug yourself quite a hole. If you know enough to disambiguate them (maybe using "layername:paramname" like we do elsewhere), you are presuming to know a lot of details about the preceding subnet, which certainly makes you wonder whether all the logic should be in one node after all.




Olivier Paquet

Nov 4, 2019, 9:47:14 AM11/4/19
to OSL Developers
On Saturday, November 2, 2019 at 04:38:25 UTC-4, Master Zap wrote:
The classic "globals" are easy, because they are the same across the entire accessed subgraph. And in my use cases, 99.9% I probably wanna modify *u*, *v* or *P* only.

And you just made my point that they're not. u,v are surface parametric coordinates, not texture coordinates. Some systems (eg. Maya) can have multiple sets of texture coordinates so speaking of "the uvs" makes no sense there. It's simple when you think only of a specific use case and shovel the implementation details to someone else. We know that well because we used to maintain the whole stack, from the application plugin, through the renderer and shading language, right down to the shading language JIT. There was no shoveling possible then so we have some decent amount of experience with picking the least annoying place to implement a feature.

The only positive I see about doing this in OSL is that it can be done once and be compatible for everyone using OSL, if done well. That's a major point which can certainly justify some extra complexity. But we still need to be careful not to create the tool that will lead OSL down the path of RSL. We also know from experience that TDs will end up using every possible feature to shoot themselves in the foot. Every single one, without fail. "People will not abuse this" is simply not true over the long term. And they also always blame us when "the shader runs slowly". So for us, OSL's lack of some features is a major feature.

Olivier

Philippe LePrince

Nov 4, 2019, 10:11:19 AM11/4/19
to osl...@googlegroups.com, Philippe Leprince

On 4 Nov 2019, at 15:47, Olivier Paquet <olivier...@gmail.com> wrote:

So for us, OSL's lack of some features is a major feature.

+100

And I would add that Nuke doesn't work like OSL at all, and this idea that they should work the same way to be more artist-friendly blows my mind. I have worked with and for artists for 25+ years and most of them are hard-working, clever people, always eager to learn to get better at their craft. So please, let's drop this idea that artists are incapable of learning or adapting and that everything should be dumbed down.

Philippe Leprince
RenderMan Field Engineer
Pixar Animation Studios

Changsoo Eun

Nov 4, 2019, 11:28:39 AM11/4/19
to OSL Developers
It is not a matter of being "incapable of learning or adapting".
For them, it becomes "why do I need to bother with this?"

Yes, you may be able to force artists in big studios to only use OSL.
But users who use any DCC with their renderer of choice have zero reason to bother with this. They can just ask each renderer's devs to make a shader that works the way they expect.

Then you lose the biggest potential advantage of OSL: compatibility.

This is the first chance we have to get an actually working universal pattern tree. But if artists are not using it, all this talk is meaningless.




Master Zap

Nov 4, 2019, 11:49:15 AM11/4/19
to OSL Developers
Some 2d shader has some UV input; in all my 2d texturing shaders the default value for this input is vector(u,v,0), meaning that if it's not connected to anything, it will get u and v as the coordinate. Which, yes, in 3ds Max is mapped to the default texture space (as does everyone else running OSL anywhere I've seen; I know the spec calls them "parametric coordinates", but nobody uses them for that).

Anyhoo, obviously we have an arbitrary number of texture spaces. If you want to use anything other than the default texture space, you connect a shader that does the appropriate getattribute call to get that texture space, easy! But if you *don't* connect anything, you get whatever u and v are - i.e. the default texture space. Super easy and user-friendly.

So yes, there is a concept of "the UV's", which is completely well-defined and meaningful for something like this.  

As a user, I have this 2d texturing thing, which is set to use the default coordinate space. I plug that into my triplanar projection node, and it handles computing the texturing coordinates, evaluating the incoming texture at those coordinates, and mixing the results appropriately.
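To spell the convention out, a minimal 2d texturing shader would look something like this (sketch only; the lockgeom=0 metadata is there because, as Larry pointed out earlier, a parameter you want to override this way must not get constant-folded away):

shader Checker2D (
    vector UVW = vector(u, v, 0)
        [[ int lockgeom = 0,
           string help = "Lookup coordinate, defaults to the standard UV channel" ]],
    float  Scale = 8.0,
    output color Out = 0.0
)
{
    // Driven entirely by UVW, so a downstream node could re-evaluate this
    // shader with a different UVW (e.g. one of the three triplanar planes).
    float checker = mod(floor(UVW[0] * Scale) + floor(UVW[1] * Scale), 2.0);
    Out = color(checker);
}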

I really don't understand the resistance to being able to have such a useful feature.

Making triplanar textures without this feature is an exercise in mind-numbing pain. Here's an example:

tri-spaghetti.png

The shader you want to use has to exist thrice. To keep settings for those shaders in sync you need a settings node that feeds the parameters to the 3 copies of the shader. Horrible. And you need a separate three-plane coordinate generator and a separate three-plane mixing shader. It's just crazy.


If we had this feature, the previous mess would look like this:

tri-nospaghetti.png


What you wanna triplane goes into "Input". Done.

How one can dislike this is beyond me, honestly.

/Z


Philippe LePrince

Nov 4, 2019, 12:04:05 PM11/4/19
to osl...@googlegroups.com, Philippe Leprince
Making triplanar textures without this feature is an exercise in mind-numbing pain

LOL. Hyperbole again? ;-)

Seriously, the first feature request I got for triplanar was "how can I map a different texture per axis". How would you maintain the required level of elegance in that case?

Philippe



Master Zap

unread,
Nov 4, 2019, 12:40:31 PM11/4/19
to OSL Developers
Three inputs, one for XY, one for YZ and one for ZX, and if YZ or ZX are not connected they use the XY input?
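
Sketched out (still with the hypothetical evaluate(); isconnected() is standard OSL):

shader TriplanarPerAxis (
    color InputXY = 0.0,   // used for all three planes unless overridden below
    color InputYZ = 0.0,
    color InputZX = 0.0,
    output color Out = 0.0
)
{
    point Pobj = transform("object", P);

    color cXY = evaluate(InputXY, "u", Pobj[0], "v", Pobj[1]);

    // fall back to the XY input when a per-axis input is left unconnected
    color cYZ = isconnected(InputYZ)
                  ? evaluate(InputYZ, "u", Pobj[1], "v", Pobj[2])
                  : evaluate(InputXY, "u", Pobj[1], "v", Pobj[2]);
    color cZX = isconnected(InputZX)
                  ? evaluate(InputZX, "u", Pobj[2], "v", Pobj[0])
                  : evaluate(InputXY, "u", Pobj[2], "v", Pobj[0]);

    // a real node would blend these with normal-based weights as in the
    // earlier sketch; a plain average keeps this sketch short
    Out = (cXY + cYZ + cZX) / 3.0;
}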

/Z

Moritz Moeller

unread,
Nov 4, 2019, 5:37:47 PM11/4/19
to osl...@googlegroups.com
I'm just a lurker on this list and I don't even use OSL, since I turned
my back on VFX almost a decade ago.

But as an ex hardcore RMan/shader guy I dare to be opinionated here. :P

TL;DR
What Zap is asking for does have downsides.
But it is a /very useful/ feature.


Long version:

On 30.10.19 16:30, Philippe LePrince wrote:
> This feature request feels like an ugly can of worms. If
> "artist-friendly" actually means "allow artist to shoot him/herself in
> the foot" then, sure, by all means.

That depends on how broad it is. No one here is asking for co-shaders,
unless I'm missing something.


I urge people who have access to a Windows box or VM to get their hands
on an old copy of DarkTree, a visual shader creation tool.
Best to download the demo version from the website and play with it.

DT implemented this feature in a very straightforward way. And not just
for distorting the space of a node.

I seem to recall that the Shader Design and Advanced Tweaks tutorials on
their website make use of this feature, but I may be wrong
(http://www.darksim.com/tutorial/index.html).

Most of the people who are weighing in against it here do /not/ seem to
be creative shader writers and/or artist types who create shaders.
Maybe I'm wrong, but that's my impression, as the counter-arguments do
appear to center around some very simple use cases.

Consider that while you do understand how dangerous such a feature is,
you may at the same time miss how useful it is. :]

If you are a shader writer you will do whatever gets you the
pattern/look you want.

For example, call some complex function you came up with that somehow
depends on the octave number inside a fractal noise's for() loop.

Or multiply two different noise types in each octave.
This is not the same as multiplying two fractal noises that each use a
different noise type.
Aka: not all use cases for this feature are space/texture coordinate
related.
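
Something like this, say (plain OSL a shader writer would just type; the
shader name and the specific noise types are arbitrary examples):

shader OctaveProduct (
    int Octaves = 6,
    output color Out = 0.0
)
{
    float freq = 1.0;
    float amp  = 0.5;
    for (int i = 0; i < Octaves; i++) {
        // the two noise types are multiplied inside each octave, before the
        // sum -- not equivalent to multiplying two finished fractal noises
        color n1 = noise("uperlin", P * freq);
        color n2 = noise("cell",    P * freq);
        Out += amp * n1 * n2;
        freq *= 2.0;
        amp  *= 0.5;
    }
}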

How do you do the above, as an artist, if all you have is OSL nodes
in some UI in some contemporary DCC app?

Two ways:

1. You have access to a DCC app that can create a 'flat' OSL shader from
source snippets that are represented as nodes.
That's how the old DCC plugins for RMan compliant renderers mostly
worked. No need to extend OSL then.

2. The language supports calling pre-compiled blocks/functions in some
way. Aka: what Zap is asking for.



For 1.: Everyone who writes plug-ins for DCC apps that somehow link to
OSL-supporting renderers could possibly add this.
Likely some aid/support from the vendor of each DCC app is needed.
It's back to how stuff worked in the old RSL days.
MtoR anyone?

But even if you did this -- unless all authors of such plugins somehow
agreed on using the same 'flat OSL from source snippets system' (which
is unlikely), what Changsoo wrote would likely apply:

On 4.11.19 17:28, Changsoo Eun wrote:
> Then you lose the biggest potential advantage of OSL: compatibility.


For 2.: add support for what Zap is asking for.


I do agree with the naysayers that TDs will use this to shoot themselves
in the foot somehow. But TDs always do. :]

<rant>
If you want to make a renderer/shading system that is TD-safe, make one
that can only do 1x1 pixel images with 1 bit depth. :P

Likely TDs will still find a way to create a 2x2 pixel image made out of
gray pixels somehow with this system. After all, that's part of their job.
</rant>

On the other hand, you could give artists an OSL feature that does what
they need and that they can use inside the DCC app they have already learned.


I would suggest making a list of the use cases people want covered by
such a feature.

Then this thread can revolve around those use cases rather than around
who's right about whether OSL needs this at all. At least for a while. ;)



Beers,

.mm

Changsoo Eun

unread,
Nov 4, 2019, 8:05:27 PM11/4/19
to OSL Developers
I'm really impressed by this post.
You read my mind.
Thanks. If we ever have a chance to meet, remind me that I owe you a bottle of beer.

Jonathan Gibbs

unread,
Nov 4, 2019, 9:00:05 PM11/4/19
to osl...@googlegroups.com
I think something which might be interesting, and productive, is to hear a counter proposal from those who don’t think “evaluate” is a good idea. How do you handle things like triplanar and texture bombing? For triplanar (or other multi-image projections), are you just happy with the kinds of networks which Zap hates, or do you have another solution to manage them? Is that a solution which could become part of OSL itself? For texture bombing, do you not support it, or only support it with images or other built-in functions?

I don’t think anyone is saying these kinds of texturing effects are strictly bad, so I’m wondering what the alternative implementations look like.

—jono



Philippe LePrince

unread,
Nov 5, 2019, 3:12:05 AM11/5/19
to osl...@googlegroups.com, Philippe Leprince
Good point Jonathan.

I am happy with the kind of networks Zap hates. I think they clearly expose the complexity and cost of the pattern graph to the end user.

I went down the route of hiding complexity and ended up with complicated code that allowed people to create very expensive graphs that should have been static textures. Moreover, to hide complexity you often have to introduce conventions (you can do this, but you can't do that) that may seem arbitrary and unintuitive to the end user.

Finally, when complexity increases, you reach a cutoff point where you have to ask yourself if you shouldn't switch to a different toolset. It could be a paint package, an OIIO plugin that rasterizes a Substance file on demand to amortise cost, etc.

Granted, Zap’s idea has merits. It simplifies the graph but introduces “semi-magical” behaviour that I find objectionable. Maybe there is another solution to this problem. I would personally leave OSL as it is today and opt for UI simplification by packaging many nodes into one (like a macro or sub-group) with selectively exposed parameters.

Cheers

Philippe Leprince
RenderMan Field Engineer
Pixar Animation Studios

Master Zap

unread,
Nov 16, 2019, 4:50:32 PM11/16/19
to OSL Developers
I had this request today from a user: "Hey, I love the Randomized Bitmap shader you have, but can't you make it so you can plug in and randomize any map?"

To a user this is nothing strange. 

Not being able to do it is what they perceive as strange...

/Z