cloth specular color

344 views

Derek Flood

unread,
Mar 21, 2017, 5:11:02 PM3/21/17
to alshaders
Cloth materials are often done by making the spec color a brighter version of the diffuse color. Normally spec would be colorless for a dielectric material. I'm wondering if you know why cloth is an exception to that rule and has colored spec the way a metal would. Is this based on physically based principles or is it just an artistic trick that looks good?

Anders Langlands

unread,
Mar 21, 2017, 10:17:20 PM3/21/17
to Derek Flood, alshaders
Cloth is basically a collection of fibres. As such on a micro scale, it's quite similar to hair in that you get a direct (white) specular highlight and a TRT "secondary" highlight that's coloured by whatever pigmentation is present in the fibre. The way that these highlights interplay is heavily dependent on the weave of the cloth, which is where the complexity comes in. 
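As a toy illustration of that lobe mix (hypothetical Python, not alshaders code; the function name and weights are invented): summing a colourless direct (R) lobe with a dye-tinted TRT lobe gives the brightened, coloured highlight the original question describes.

```python
import numpy as np

def fibre_highlight(r_weight, trt_weight, dye_color):
    """Sum a colourless direct (R) lobe with a TRT lobe tinted by the dye."""
    white = np.array([1.0, 1.0, 1.0])
    return r_weight * white + trt_weight * np.asarray(dye_color, dtype=float)

# A red-dyed fibre: the white R lobe desaturates the coloured TRT lobe.
print(fibre_highlight(0.2, 0.5, [1.0, 0.1, 0.1]))  # -> [0.7, 0.25, 0.25]
```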



--
You received this message because you are subscribed to the Google Groups "alshaders" group.
To unsubscribe from this group and stop receiving emails from it, send an email to alshaders+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Derek Flood

unread,
Mar 22, 2017, 2:46:22 AM3/22/17
to alshaders, dere...@gmail.com
Ah, yes the hair/fabric fibre analogy makes a lot of sense, thanks Anders! So the TRT is back-scattered light that got internally reflected, thus picking up the color. Fascinating.

Orkhan Ashrafov

unread,
Mar 22, 2017, 6:24:58 AM3/22/17
to Derek Flood, alshaders

What about using facing ratio and a ramp for the diffuse only and setting the specular to white? Does that mean this method is technically not physically plausible?

Derek Flood

unread,
Mar 22, 2017, 2:37:05 PM3/22/17
to alshaders
From what I understand (and honestly this is rather above my head as an artist so it's possible I am misunderstanding), hair and cloth shaders rely on there being something more than a typical surface geo. In the case of hair, small tube geo is generated which the hair material is applied to. In the case of fabric one would similarly need to have a way of generating the fibers which weave together to create the fabric. This is not done with geo like hair is (perhaps because the fibers are too small?) and so various approaches have been proposed, for example creating the woven fibers with voxels.

That's all to say that a shader which would be applied to a simple surface geo (say a model of a shirt) would not really be able to use the idea of TT (transmission through the fiber) and TRT (internal reflections inside the fibers) since it is just one big fat model of a shirt. At best one would be able to mimic the appearance of the cloth at a distance where the weave of the fabric cannot be seen. For that one would apply a facing ratio ramp to the diffuse with a brighter color on the incident angle, simulating the fibers picking up light at glancing angles (sort of TT), and also tint the spec with the diffuse color to get a color that is a bit brighter than the diffuse (sort of TRT). As far as the primary spec (R), I'd say that while some cloths like silk have this "shine" to them, many fabrics such as cotton have no discernible white spec.
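The artist trick described above could be sketched like this (illustrative Python; the function name and the boost/tint numbers are made up for the example, not taken from any shader):

```python
import numpy as np

def facing_ratio_cloth(n, v, diffuse_color, edge_boost=1.5, spec_tint=1.3):
    """Brighten the diffuse toward grazing angles and tint the spec with the dye."""
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    v = np.asarray(v, dtype=float) / np.linalg.norm(v)
    facing = abs(float(np.dot(n, v)))                   # 1 when facing, 0 at grazing
    ramp = 1.0 + (edge_boost - 1.0) * (1.0 - facing)    # brighter at the edge ("sort of TT")
    diffuse = np.asarray(diffuse_color, dtype=float) * ramp
    spec = np.asarray(diffuse_color, dtype=float) * spec_tint  # tinted spec ("sort of TRT")
    return diffuse, spec
```

At a facing angle the diffuse is unchanged, and at a grazing angle it is boosted by `edge_boost`, mimicking fibres catching light edge-on.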

Anders Langlands

unread,
Mar 22, 2017, 4:22:10 PM3/22/17
to Derek Flood, alshaders
You see the white highlight clearly on all fabrics, but because the macro-scale effect of the interwoven fibres creates a fairly rough highlight it usually reads as a desaturation of the dye colour. For instance: http://www.mrwholesale.net/image/cache/data/women/v-neck/women-v-neck-top-royal-blue-t-shirt-845x1000.jpg

You can absolutely use the same lobes as you do in a hair shader for cloth. One way to represent this is conceptually just to cover the surface in a tiled (subpixel) patch of weave pattern, defined as a series of interlocking curves. At the shading point you then work out the tangent of each of the curves under the pixel and essentially just shade them as hairs, then sum their contributions (also accounting for inter-fibre shadowing and visibility). The best examples of this are Irawan (http://www.graphics.cornell.edu/pubs/2006/IM06.pdf) and Sadeghi (http://graphics.ucsd.edu/~iman/a_practical_microcylinder_appearance_model_for_cloth_rendering.php - site seems to be down currently). I've had a hybrid of those two on my to-do list for a long time now, but held up by the sampling.
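A rough sketch of that idea (hypothetical Python, with a simple Kajiya-Kay lobe standing in for a full hair BSDF, and inter-fibre shadowing and visibility omitted): take the two thread tangents under the shading point and sum their fibre-shaded contributions.

```python
import numpy as np

def kajiya_kay_spec(t, l, v, exponent=40.0):
    """Specular response of a fibre with tangent t, light dir l, view dir v."""
    t, l, v = (np.asarray(x, dtype=float) / np.linalg.norm(x) for x in (t, l, v))
    tl, tv = np.dot(t, l), np.dot(t, v)
    # Kajiya-Kay specular term: sin(t,l) * sin(t,v) - cos(t,l) * cos(t,v)
    s = np.sqrt(max(0.0, 1.0 - tl * tl)) * np.sqrt(max(0.0, 1.0 - tv * tv)) - tl * tv
    return max(0.0, s) ** exponent

def weave_spec(warp_t, weft_t, l, v, warp_weight=0.5):
    """Sum the fibre lobes of the two thread directions under the pixel."""
    return (warp_weight * kajiya_kay_spec(warp_t, l, v)
            + (1.0 - warp_weight) * kajiya_kay_spec(weft_t, l, v))
```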

I don't know of any work that's tried to address inter-fibre scattering without going into voxel representations, but it would be interesting to explore cheats for it.

Cheers,

Anders


Derek Flood

unread,
Mar 22, 2017, 6:46:53 PM3/22/17
to alshaders, dere...@gmail.com


"because the macro-scale effect of the interwoven fibres creates a fairly rough highlight it usually reads as a desaturation of the dye colour"

Right, that's what I was attempting to convey in my OP. Do you know of an explanation of why the white spec is effectively "tinted" by the dye color?  This is not what happens on more common dielectrics where the spec is colorless. So I'd be interested in understanding what is going on here theoretically to tint the spec with the dye color.

Thanks for the links, I'll read up on those (hopefully the one will be up again soon). I was actually unable to find the Irawan paper, so it's great that you linked to it, thanks!

Anders Langlands

unread,
Mar 22, 2017, 7:10:41 PM3/22/17
to Derek Flood, alshaders
The spec is still white, but the colour you see under each pixel is the sum of direct reflections (white) and many different orders of scattering which will have varying degrees of colouring due to dye absorption. So the net result is white + red = pink, for example.
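Numerically, that mixing looks like this (toy values, purely illustrative):

```python
import numpy as np

white_spec = np.array([0.4, 0.4, 0.4])    # direct (R) reflections: colourless
scattered  = np.array([0.5, 0.05, 0.05])  # higher-order scattering, absorbed by red dye
pixel = white_spec + scattered            # -> [0.9, 0.45, 0.45], a pinkish red
print(pixel)
```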


Derek Flood

unread,
Mar 22, 2017, 8:28:16 PM3/22/17
to alshaders, dere...@gmail.com
So would this tinted spec (i.e. white spec that is so dim that it appears tinted) be the "R", in addition to the colored "secondary spec" of the "TRT"? So essentially a double spec like on the ALsurface, both being tinted? Or is this not something that can be captured properly with a BRDF like ALsurface as it is currently implemented (i.e. by simply tweaking the shader attributes)?

Derek Flood

unread,
Mar 22, 2017, 9:43:16 PM3/22/17
to alshaders
Anders, the Irawan/Marschner paper refers to an accompanying video, but I was not able to find it on the Cornell website. Do you have a link?

Derek Flood

unread,
Mar 23, 2017, 1:55:27 PM3/23/17
to alshaders
"I've had a hybrid of those two on my to-do list for a long time now, but held up by the sampling."

Yes, as I read the first paper I noticed he kept saying that the implementation is simple and fast, but I recall you saying before (and here too) that it was actually difficult (because of the sampling). LOL.

He mentions having both a BTF for close-up details, and a BRDF for the far away look. From what I understand you to be saying here, the idea would be to have the BRDF driven by the shading info from the BTF. Am I getting that right basically? As a user I can say that getting the BRDF part right is actually not that hard with existing shaders (like ALsurface). The challenge comes in trying to get the thread detail with only bump maps, which seems to stretch them beyond what they are reasonably capable of. So it's really the BTF part that is the missing piece, more than the BRDF, at least that's how I see it from my perspective as an artist.

Is this a similar approach to the two papers you cited? Or is it a different approach altogether?

Philip Child

unread,
Mar 23, 2017, 2:23:03 PM3/23/17
to Derek Flood, alshaders
This is an area close to my heart, having implemented the cloth shading on Brave. On that we ray marched into the surface to generate the volumetric details from an implicit function that described the fibres. Because this was a distance field we could also use the distance info to calculate inter-fibre shading such as macro occlusion. So it was possible to generate huge amounts of detail on the fly with no actual geometry. Much like Anders says - you don't really need geom as long as you work out what the result would be if there really was geom! There is a great Disney paper from the 101 Dalmatians production which gives some nice examples of this: http://danbgoldman.com/misc/fakefur/fakefur.pdf

We did also generate curve geometry and ended up with a combination technique, mainly because of the difficulty of antialiasing the tiny details without a low shading rate.

Philip


Anders Langlands

unread,
Mar 23, 2017, 2:57:01 PM3/23/17
to Philip Child, Derek Flood, alshaders
Very interesting Philip, thanks! How did you generate the distance field that you passed to the shader? 


Philip Child

unread,
Mar 23, 2017, 4:08:33 PM3/23/17
to Anders Langlands, Derek Flood, alshaders
I gave a talk about it at Siggraph a few years ago but annoyingly I can't find much online to share.
Here's a few bits. The talk was "Ill-Loom-inating Handmade Fabric in 'Brave'".
There's a cutdown clip from the BluRay extras which shows one of my early tests (no transparency in this one so the fibres look quite hard):
What you can see here is the basis of the distance field. It's just deformed tubes (so a closest-point-to-a-section-of-a-tube function) which are generated in tangent space and twisted around, with a sub-process repeating this for the sub-fibres. Most of the work is just converting back and forth between polar and Cartesian coordinates to work out which fibre we should be looking up at any time.
It's highly efficient because it only calculates tiny parts of the distance field at every ray-march sample point. It's broken down into very basic functions actually, but quite hard to describe without diagrams!
Anyway, that distance field is great for faking macro-level shading cheaply as it's all implicit.
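As a hypothetical sketch of that kind of implicit fibre field (not the Brave code; every name and constant here is invented for illustration): a bundle of fibres twisted around a yarn axis, where the polar-to-Cartesian round trip picks which fibre a sample point belongs to, sphere-traced one sample at a time.

```python
import math

def fibre_distance(x, y, z, n_fibres=6, yarn_r=0.3, fibre_r=0.08, twist=2.0):
    """Signed distance to the nearest fibre of a twisted bundle along the z axis."""
    theta = math.atan2(y, x) - twist * z                      # untwist the cross-section
    r = math.hypot(x, y)
    sector = 2.0 * math.pi / n_fibres
    theta = (theta + sector / 2.0) % sector - sector / 2.0    # frame of the nearest fibre
    px, py = r * math.cos(theta), r * math.sin(theta)         # back to Cartesian
    return math.hypot(px - yarn_r, py) - fibre_r              # distance to that fibre's core

def raymarch(origin, direction, max_steps=64, eps=1e-4):
    """Sphere-trace the implicit fibre field; return the hit distance or None."""
    t = 0.0
    for _ in range(max_steps):
        p = [o + t * d for o, d in zip(origin, direction)]
        d = fibre_distance(*p)
        if d < eps:
            return t          # hit a fibre surface
        t += d                # safe step: nothing is closer than d
    return None               # missed the bundle
```

Only tiny parts of the field are ever evaluated, which is what makes the approach cheap despite the amount of detail it produces.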

Happy to discuss more if people are interested,
P



Philip Child

unread,
Mar 23, 2017, 4:13:50 PM3/23/17
to Anders Langlands, Derek Flood, alshaders
Oh yes, and if you can be bothered to read this quite dry document, there is a description of it here:

Derek Flood

unread,
Mar 23, 2017, 11:35:11 PM3/23/17
to alshaders, andersl...@gmail.com, dere...@gmail.com
Really fascinating stuff Philip! I'd love to hear more.
Lots of Pixar stuff is up on graphics.pixar.com. I wonder why your Siggraph talk is not there too? I'd love to see it. :)

Philip Child

unread,
Mar 24, 2017, 2:52:11 AM3/24/17
to Derek Flood, alshaders, andersl...@gmail.com
I think that site is just for published papers rather than talks. I know many big studio SIGGRAPH talks are allowed to be recorded.
I'll see what sharable info I can dig out.
P
