Hi,
I've been playing around with an integration of OCIO using the GPU path (latest code, build 50) and things are going relatively smoothly. There's one remaining problem, though: my conversion results don't match a conversion made in Nuke.
I'm using the Nuke-Default configuration and testing the linear to sRGB conversion:
colorspaces:
  - !&lt;ColorSpace&gt;
    name: linear
    family: ""
    equalitygroup: ""
    bitdepth: 32f
    description: |
      Scene-linear, high dynamic range. Used for rendering and compositing.
    isdata: false
    allocation: lg2
    allocationvars: [-15, 6]

  - !&lt;ColorSpace&gt;
    name: sRGB
    family: ""
    equalitygroup: ""
    bitdepth: 32f
    description: |
      Standard RGB Display Space
    isdata: false
    allocation: uniform
    allocationvars: [-0.125, 1.125]
    to_reference: !&lt;FileTransform&gt; {src: srgb.spi1d, interpolation: linear}
My source image is Marci_512_linear.exr, which comes from the reference images. I applied a linear to sRGB transform in Nuke and saved the result as a 32-bit uncompressed EXR.
On my side, I have applied the same transform but using the GPU path.
In the hair region, where pixel values are well above 1.0, I don't get the same result: my output clips to 1.0, while the one from Nuke clips to about 1.25. The same thing happens for pixels below 0.0: my result clips to 0.0, but in Nuke's output I can still see sub-zero values. It looks like the allocation vars aren't being taken into account, or something along those lines.
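For reference, here's my understanding of what the lg2 allocation is supposed to do: with allocation: lg2 and allocationvars: [-15, 6], the GPU path should normalize the input through log2 before the LUT lookup, so scene-linear values up to 2^6 = 64.0 still land inside [0, 1] instead of clipping. A rough Python sketch of the mapping I'd expect (the function name and epsilon handling are mine, not OCIO's):

```python
import math

# allocationvars for the "linear" space: min/max of log2(value)
LG2_MIN, LG2_MAX = -15.0, 6.0

def lg2_alloc_forward(x, eps=2.0 ** LG2_MIN):
    """Map scene-linear x into [0, 1] using the lg2 allocation vars."""
    lg = math.log2(max(x, eps))          # clamp at the low end only
    return (lg - LG2_MIN) / (LG2_MAX - LG2_MIN)
```

With this mapping, 1.0 lands at about 0.714 and a hair pixel of 1.25 at about 0.730, both comfortably inside the LUT, which is why I'd expect the HDR values to survive.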
My shader output looks like this (the shading language is HLSL_DX11):
Texture2D ociolut1d_0;
SamplerState ociolut1d_0Sampler;

float2 ociolut1d_0_computePos(float f)
{
    float dep = min(f, 1.0) * 65535.;
    float2 retVal;
    retVal.y = float(int(dep / 4095.));
    retVal.x = dep - retVal.y * 4095.;
    retVal.x = (retVal.x + 0.5) / 4096.;
    retVal.y = (retVal.y + 0.5) / 17.;
    return retVal;
}

float4 OCIOConvert(in float4 inPixel)
{
    float4 outColor = inPixel;
    outColor.r = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.r)).r;
    outColor.g = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.g)).g;
    outColor.b = ociolut1d_0.Sample(ociolut1d_0Sampler, ociolut1d_0_computePos(outColor.b)).b;
    return outColor;
}
If anyone has run into this issue before, I'd be happy to hear about it :)
Thanks!