"position cam" AOV


Riccardo Cecchinato

Feb 14, 2025, 6:36:54 AM
to gaffer-dev
Hello dear Gaffer devs!

I am trying to set up a custom AOV for compositing that lets the compositors sample the position of each object in the scene relative to the render camera.

In Maya I would use the samplerInfo node with its "camera position" outputs, but in Gaffer 1.5.2 I am not able to find an equivalent node.

I played around with the Arnold "Utility" node connected to a "SpaceTransform", trying to convert the P info from world to camera space, but I cannot get the correct result. Probably I am not using the SpaceTransform node correctly.

Do you have any suggestions/ideas?

Thank you in advance for your help,

Riccardo
 

Sachin Shrestha

Feb 15, 2025, 3:52:10 AM
to gaffer-dev
Hi Riccardo,

You will probably need to set the input type in the SpaceTransform node to vector for it to work correctly. Attached is a snapshot with the regular world-space P from the utility shader on the left and the camera-space P on the right. The SpaceTransform settings are in the Node Editor at the bottom right.

P_camera.png

Hope this helps.

-Sachin

Riccardo Cecchinato

Feb 17, 2025, 4:00:46 AM
to gaffe...@googlegroups.com
Hello Sachin,
thanks a lot for your help!

I already got that result, but as far as I can tell it is not the correct behaviour.
Normally the "position to cam" AOV should look something like this (confirmed by compositing):
image.png

The shading node being:
image.png

Does it ring any bells?

Thank you again.

Riccardo



--


Riccardo Cecchinato

Lead Lighting

stimstudio.com

Paolo DE LUCIA

Feb 17, 2025, 4:18:40 AM
to gaffer-dev
Set the position to screen space in the SpaceTransform:


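# Gaffer node-graph serialisation: this can typically be pasted straight into the
# Graph Editor, which provides the `parent` variable used below.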
import Gaffer
import GafferArnold
import GafferScene
import IECore
import imath

Gaffer.Metadata.registerValue( parent, "serialiser:milestoneVersion", 1, persistent=False )
Gaffer.Metadata.registerValue( parent, "serialiser:majorVersion", 4, persistent=False )
Gaffer.Metadata.registerValue( parent, "serialiser:minorVersion", 11, persistent=False )
Gaffer.Metadata.registerValue( parent, "serialiser:patchVersion", 0, persistent=False )

__children = {}

__children["StateVector"] = GafferArnold.ArnoldShader( "StateVector" )
parent.addChild( __children["StateVector"] )
__children["StateVector"].loadShader( "state_vector" )
__children["StateVector"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["SpaceTransform"] = GafferArnold.ArnoldShader( "SpaceTransform" )
parent.addChild( __children["SpaceTransform"] )
__children["SpaceTransform"].loadShader( "space_transform" )
__children["SpaceTransform"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["ShaderAssignment4"] = GafferScene.ShaderAssignment( "ShaderAssignment4" )
parent.addChild( __children["ShaderAssignment4"] )
__children["ShaderAssignment4"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["PathFilter59"] = GafferScene.PathFilter( "PathFilter59" )
parent.addChild( __children["PathFilter59"] )
__children["PathFilter59"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["Flat"] = GafferArnold.ArnoldShader( "Flat" )
parent.addChild( __children["Flat"] )
__children["Flat"].loadShader( "flat" )
__children["Flat"].addChild( Gaffer.V2fPlug( "__uiPosition", defaultValue = imath.V2f( 0, 0 ), flags = Gaffer.Plug.Flags.Default | Gaffer.Plug.Flags.Dynamic, ) )
__children["StateVector"]["parameters"]["variable"].setValue( 'P' )
__children["StateVector"]["__uiPosition"].setValue( imath.V2f( -346.135742, -420.145081 ) )
__children["SpaceTransform"]["parameters"]["input"].setInput( __children["StateVector"]["out"] )
__children["SpaceTransform"]["parameters"]["type"].setValue( 'vector' )
__children["SpaceTransform"]["parameters"]["to"].setValue( 'screen' )
Gaffer.Metadata.registerValue( __children["SpaceTransform"]["out"], 'compoundNumericNodule:childrenVisible', True )
__children["SpaceTransform"]["__uiPosition"].setValue( imath.V2f( -333.866455, -421.945068 ) )
__children["ShaderAssignment4"]["filter"].setInput( __children["PathFilter59"]["out"] )
__children["ShaderAssignment4"]["shader"].setInput( __children["Flat"]["out"] )
__children["ShaderAssignment4"]["__uiPosition"].setValue( imath.V2f( -310.226074, -421.945068 ) )
__children["PathFilter59"]["paths"].setValue( IECore.StringVectorData( [ '.../...' ] ) )
__children["PathFilter59"]["__uiPosition"].setValue( imath.V2f( -294.41745, -417.26236 ) )
__children["Flat"]["parameters"]["color"]["b"].setValue( 0.0 )
Gaffer.Metadata.registerValue( __children["Flat"]["parameters"]["color"], 'compoundNumericNodule:childrenVisible', True )
__children["Flat"]["parameters"]["color"]["r"].setInput( __children["SpaceTransform"]["out"]["x"] )
__children["Flat"]["parameters"]["color"]["g"].setInput( __children["SpaceTransform"]["out"]["y"] )
__children["Flat"]["__uiPosition"].setValue( imath.V2f( -323.866699, -421.945068 ) )


del __children

Paolo DE LUCIA

Feb 17, 2025, 4:28:56 AM
to gaffer-dev
Though I don't see the point of this pass: it can be generated in comp with a simple expression, multiplied by the alpha.
r = (x/(width/2)-1)
g = (y/(height/2)-1)
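
For reference, a minimal sketch of that comp-side version in plain Python/NumPy (rather than an actual Nuke Expression node); `width`, `height` and `alpha` are assumed to come from the rendered plate:

import numpy as np

def screen_position_ramp( width, height, alpha ):
	# r = x / (width / 2) - 1  and  g = y / (height / 2) - 1, premultiplied by alpha.
	y, x = np.mgrid[ 0:height, 0:width ].astype( np.float32 )
	r = x / ( width / 2.0 ) - 1.0
	g = y / ( height / 2.0 ) - 1.0
	b = np.zeros_like( r )
	return np.stack( [ r, g, b ], axis = -1 ) * alpha[ ..., None ]

# Example: a 4x2 plate with a fully opaque alpha; red runs from -1 towards +1 across the frame.
ramp = screen_position_ramp( 4, 2, np.ones( ( 2, 4 ), dtype = np.float32 ) )
print( ramp[ ..., 0 ] )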

Sachin Shrestha

Feb 17, 2025, 4:33:40 AM
to gaffe...@googlegroups.com
Aah yes, I think the utility shader's P is relative to the bbox, so your state_vector node is the correct answer. Or the oslInPoint node would do as well.
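
For the original camera-space request, a rough sketch along the same lines as Paolo's paste, just aiming the space_transform at camera space (assuming 'camera' is among its target spaces; 'point' is used here so the camera translation is applied too, whereas the screen-space paste uses 'vector' - worth verifying against your build):

import Gaffer
import GafferArnold

script = Gaffer.ScriptNode()

# World-space P from Arnold's state_vector shader.
stateVector = GafferArnold.ArnoldShader( "StateVector" )
script.addChild( stateVector )
stateVector.loadShader( "state_vector" )
stateVector["parameters"]["variable"].setValue( "P" )

# Convert it to camera space; 'point' also applies the camera translation.
spaceTransform = GafferArnold.ArnoldShader( "SpaceTransform" )
script.addChild( spaceTransform )
spaceTransform.loadShader( "space_transform" )
spaceTransform["parameters"]["type"].setValue( "point" )
spaceTransform["parameters"]["to"].setValue( "camera" )
spaceTransform["parameters"]["input"].setInput( stateVector["out"] )

# The x/y/z of spaceTransform["out"] can then drive an AOV shader,
# e.g. a flat, as in the screen-space version.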

Riccardo Cecchinato

Feb 17, 2025, 5:05:56 AM
to gaffe...@googlegroups.com
Hello Paolo,
thanks a lot, your node tree gives a result much closer to what I'm after!

I understand your second point about generating the pass in comp, but I tested both methods and compared them, and I can clearly see a difference between the two, both visually and in values (render on the left, Nuke on the right, same render pass):
image.png image.png

I'm not sure why this difference happens. Do you have any idea about this behaviour?
I must admit I don't know much about the topic; it is a compositing request I need to deliver asap, so I haven't had time to do much research (even though I understand its theoretical usage).

Sachin,
thanks again and I'll test the oslInPoint too!

Riccardo

Paolo DE LUCIA

Feb 18, 2025, 8:41:45 AM
to gaffer-dev
Ah, my bad.
Because it's world-space units converted to screen space, a unit looks bigger close to the camera than far away... the variation in value comes from the perspective.
So the 2D trick is not an option.
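
A quick numeric illustration of that, assuming a simple pinhole model where the screen x of a camera-space point is its x divided by the distance along the view axis (up to the field-of-view scale):

# The same 1-unit sideways offset covers very different amounts of screen space
# depending on how far the point is from the camera.
def screen_x( cam_x, cam_z, fov_scale = 1.0 ):
	return fov_scale * cam_x / -cam_z   # camera looks down -Z

print( screen_x( 1.0, -2.0 ) )    # 1 unit to the side, 2 units away   -> 0.5
print( screen_x( 1.0, -20.0 ) )   # 1 unit to the side, 20 units away  -> 0.05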

Vinicius Villela

Feb 19, 2025, 12:53:26 AM
to gaffe...@googlegroups.com
I don't know if this helps, but this is how I configure the position pass.

image.png




--
Att. Vinicius Villela