I am trying to estimate correction parameters to correct (equalize) the image colors of an entire dataset.
Let's assume I have an equation for the corrected pixel color:
corrected(RGB, parameters) = parameters.exposure * RGB + parameters.brightness;
Then the residual error between two pixel colors is:
residual(RGB1, RGB2, parameters1, parameters2) = corrected(RGB2, parameters2) - corrected(RGB1, parameters1)
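In code, the model and the pairwise residual can be sketched like this (a minimal NumPy sketch; the parameter names follow the equations above):

```python
import numpy as np

def corrected(rgb, exposure, brightness):
    # Per-image correction: scale by exposure, shift by brightness.
    return exposure * rgb + brightness

def residual(rgb1, rgb2, params1, params2):
    # Difference between the two corrected colors of one tie point,
    # where params = (exposure, brightness) for each image.
    return corrected(rgb2, *params2) - corrected(rgb1, *params1)
```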
Residuals are defined between some subset of all image pairs. Think of it as image pairs sharing common tie points in a bundle adjustment scene, forming a graph.
The problem I have with this formulation is that the best solution is always to drive exposure to 0 (zero), which makes all pixels black and does in fact minimize the residual error. What would be the best way to prevent this?
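The degeneracy is easy to verify numerically: with every exposure at zero and all brightnesses equal, every residual vanishes no matter how different the input colors are (a quick check of the model defined above):

```python
import numpy as np

def corrected(rgb, exposure, brightness):
    # Same correction model as above.
    return exposure * rgb + brightness

# Two arbitrary, very different pixel colors.
rgb1 = np.array([0.9, 0.1, 0.3])
rgb2 = np.array([0.2, 0.8, 0.5])

# Degenerate parameters: exposure = 0 discards all image content,
# and identical brightness cancels in the difference.
params = (0.0, 0.25)
r = corrected(rgb2, *params) - corrected(rgb1, *params)
print(r)  # -> [0. 0. 0.]
```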
One way I can think of is to introduce a dynamic cost function that forces the sum of all exposure values to stay constant over the entire optimization. This sounds a bit like a hack, and it would be hard to choose a weight for this cost function such that the algorithm behaves the same way regardless of problem size.
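Such a gauge-fixing penalty could be sketched as one extra scalar residual on top of the pairwise ones. This is only an illustration of the idea; the weight `w` and the target sum are assumptions, and as noted, choosing `w` robustly is the hard part:

```python
import numpy as np

def gauge_residual(exposures, target_sum, w):
    # Single scalar residual penalizing drift of the summed exposures
    # away from their initial sum. w trades this term off against the
    # pairwise color residuals; a good value is problem-dependent.
    return w * (np.sum(exposures) - target_sum)

exposures = np.array([1.0, 0.9, 1.1, 1.05])
print(gauge_residual(exposures, target_sum=4.0, w=10.0))
```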
Is it possible to use something similar to a Sphere Manifold, but applied to the same parameter across multiple parameter blocks? That would force the algorithm to keep the sum of exposure values constant, since that is how the tangent space would be defined.
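A hand-rolled version of that idea, for gradient-style updates, would be to project each update onto the tangent space of the constraint sum(exposure) = const, i.e. remove the component along the all-ones direction. (This is not an existing Ceres API; Ceres manifolds attach to a single parameter block, so spanning multiple blocks would need a custom manifold or a reparameterization. A minimal sketch of the projection itself:)

```python
import numpy as np

def project_to_tangent(delta):
    # Tangent space of {e : sum(e) = const} is {d : sum(d) = 0},
    # so subtract the mean (the component along the all-ones
    # direction) from the update vector.
    return delta - np.mean(delta)

exposures = np.array([1.0, 1.0, 1.0])
delta = np.array([-0.3, -0.2, -0.1])    # a step that would shrink everything
exposures = exposures + project_to_tangent(delta)
print(exposures, exposures.sum())       # the sum stays at 3.0
```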
One partial solution is to hold a single exposure parameter constant, so the others optimize toward that value. But this has less and less effect as the problem grows, because many images have no cost function directly connecting them to the constant parameter block. Those parameters still converge toward zero, more so with increasing graph distance between the locked parameter block and the parameter block in question.
Another solution I came up with is to do incremental optimizations: start with one parameter block held constant, then add and optimize only the parameters adjacent to it in the graph. After every step, make all optimized parameters constant and add new ones.
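The incremental scheme can be sketched as a breadth-first sweep over the image graph. This only illustrates the ordering; the actual per-block solve against its already-frozen neighbors is left out, and the graph below is a made-up example:

```python
from collections import deque

def incremental_order(graph, seed):
    # graph: adjacency dict image_id -> list of connected image_ids.
    # Returns the order in which blocks would be optimized and then
    # frozen, starting from the block held constant (the seed).
    fixed = {seed}
    order = []
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in fixed:
                fixed.add(nbr)
                order.append(nbr)   # optimize nbr against fixed neighbors,
                queue.append(nbr)   # then freeze it before moving on
    return order

graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(incremental_order(graph, 0))  # -> [1, 2, 3]
```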
Am I missing something? I've come across similar issues multiple times while solving different problems.
I can imagine the same problem arising in a multi-view ICP algorithm, where one "point cloud" would be held constant and the others would move, rotate, and scale to fit. What would prevent the floating "point clouds" from scaling to zero, where the errors (distances) would be minimal? Especially when there are many "point clouds", as in SLAM.
Regards, Mitja.