Hi,
This post is not limited to pbrt; I just have some thoughts and questions about differentiable rendering.
For inverse rendering problems, we want to do some kind of optimization in the image domain. Suppose the whole 3D scene is given and I can render it with some renderer; I would then like to optimize for geometry parameters, "appearance" (BSDF-related) parameters, or emitter parameters. If the rendering process is differentiable, then at least I could use gradient descent for the optimization (or whatever learning scheme). I found some works in the computer vision community, such as the Neural 3D Mesh Renderer (CVPR 2018) and OpenDR, but they are all based on rasterization and cannot handle global illumination. As far as I know, there is no ray-tracing renderer that supports automatic differentiation so that we could do optimization on top of it. There is some work limited to simple scenes or specific problems, but I cannot find any general ray-tracing renderer that handles this general problem.
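Just to make the setup concrete, here is a minimal sketch of the optimization loop I have in mind, written with PyTorch's autograd against a toy differentiable "renderer" (Lambertian shading only, no visibility and no global illumination). Every name in it is made up for illustration; it is not taken from any existing system.

    import torch

    def render(albedo, light_dir, normals):
        # Toy "renderer": pure Lambertian shading. Every op here is
        # differentiable, so autograd can backpropagate through it.
        cos_theta = (normals * light_dir).sum(dim=-1).clamp(min=0.0)
        return albedo * cos_theta.unsqueeze(-1)

    # Hypothetical fixed scene inputs and a reference image.
    normals = torch.nn.functional.normalize(torch.randn(64, 64, 3), dim=-1)
    light_dir = torch.nn.functional.normalize(torch.tensor([0.0, 0.0, 1.0]), dim=0)
    target = render(torch.tensor([0.2, 0.6, 0.3]), light_dir, normals)

    # Unknown appearance parameter we want to recover.
    albedo = torch.tensor([0.5, 0.5, 0.5], requires_grad=True)
    optimizer = torch.optim.Adam([albedo], lr=0.05)

    for step in range(200):
        optimizer.zero_grad()
        image = render(albedo, light_dir, normals)
        loss = (image - target).pow(2).mean()  # image-domain "energy"
        loss.backward()                        # autodiff through the renderer
        optimizer.step()

The hard part, of course, is replacing this toy render() with a real path tracer while keeping the gradients correct.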
I know that ray tracing is highly stochastic, but if we use quasi-random sampling, is it deterministic? Also, the goal is basically to differentiate the rendering equation, and I am not sure whether the occlusion term inside the geometry term makes it non-differentiable, not to mention volumetric rendering and other advanced techniques.
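To spell out what I mean by the occlusion term: in the area (three-point) form of the rendering equation, the geometry term carries a binary visibility factor (standard notation, nothing new on my part):

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_A f_s(x' \to x \to \omega_o)\, L_o(x' \to x)\, G(x, x')\, \mathrm{d}A(x'),

    G(x, x') = V(x, x')\, \frac{|\cos\theta_x|\, |\cos\theta_{x'}|}{\lVert x - x' \rVert^2}.

Here V(x, x') \in \{0, 1\} is the binary visibility. As a function of the scene parameters (say, a mesh vertex position), V is piecewise constant: its derivative is zero almost everywhere and concentrates into a Dirac delta exactly at occlusion/silhouette boundaries, so naively auto-differentiating through the visibility test would miss those boundary contributions. (On the determinism point: with a fixed quasi-random sequence the Monte Carlo estimate is indeed a deterministic function of the scene parameters, but that alone does not make this term differentiable.)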
This topic is quite overwhelming for me right now. My thought is simply that if we define some kind of "energy" or "loss function" in the image domain, maybe we could use this idea to optimize for geometry/BSDF/texture/emitter/participating media. My primary questions are: is this "differentiable ray tracer" meaningful? If it is, is anyone working on a similar project? And if it were implemented on top of auto-differentiation, how slow (of course!) would it be? I have no idea of the scope of this problem yet.
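In symbols, the objective I have in mind (my own ad-hoc notation) would be something like

    \theta^* = \arg\min_{\theta} \; \lVert R(\theta) - I_{\mathrm{ref}} \rVert_2^2,

where R is the renderer, \theta collects the geometry/BSDF/texture/emitter/media parameters, and I_ref is a reference image; gradient descent then needs \partial R(\theta) / \partial \theta.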
I hope this is not total "bullshit", and I would highly appreciate any reply.
Thank you,
Zejian Wang