How to prevent scaling of entire problem to zero

Mitja P

Mar 20, 2023, 2:34:16 PM
to Ceres Solver
I am trying to calculate correction parameters to correct (equalize) the image colors of an entire dataset.

Let's assume we have an equation to calculate the corrected pixel color:
corrected(RGB, parameters) = parameters.exposure * RGB + parameters.brightness;

Then the residual error between two pixel colors is:
residual(RGB1, RGB2, parameters1, parameters2) = corrected(RGB2, parameters2) - corrected(RGB1, parameters1)
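A minimal sketch of this model in plain C++ (the struct and function names are illustrative, not from any library) makes the degeneracy easy to see: setting every exposure and brightness to zero drives every residual to exactly zero.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Hypothetical per-image correction parameters (names are illustrative).
struct Correction {
  double exposure;    // multiplicative gain
  double brightness;  // additive offset
};

// corrected(RGB, parameters) = exposure * RGB + brightness, per channel.
std::array<double, 3> Corrected(const std::array<double, 3>& rgb,
                                const Correction& p) {
  return {p.exposure * rgb[0] + p.brightness,
          p.exposure * rgb[1] + p.brightness,
          p.exposure * rgb[2] + p.brightness};
}

// Pairwise residual between two observations of the same tie point.
std::array<double, 3> Residual(const std::array<double, 3>& rgb1,
                               const std::array<double, 3>& rgb2,
                               const Correction& p1, const Correction& p2) {
  const auto c1 = Corrected(rgb1, p1);
  const auto c2 = Corrected(rgb2, p2);
  return {c2[0] - c1[0], c2[1] - c1[1], c2[2] - c1[2]};
}
```

In a real Ceres problem this residual would live inside a cost functor, but the degenerate minimum is a property of the math, not of the solver.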

The residual is defined over some subset of all image pairs. Think of it as image pairs sharing common tie points in a bundle adjustment scene, forming a graph.

The problem I have with this formulation is that the best solution is always to drive exposure to 0 (zero), which makes all pixels black and indeed minimizes the residual error. What would be the best way to prevent this?

One way I can think of is to introduce a cost function that forces the sum of all exposure values to remain constant over the entire optimization. This sounds a bit like a hack, and it would be hard to pick a weight for this cost function such that the algorithm behaves the same way regardless of problem size.
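The soft-constraint idea above might look like the following one-dimensional penalty residual (a sketch, assuming exposures are initialized at 1 so the target sum equals the number of images; the weight is exactly the problem-size-dependent knob in question):

```cpp
#include <cassert>
#include <vector>

// Soft gauge penalty: weight * (sum(exposures) - n), pulling the total
// exposure back toward its initial value n (one exposure per image,
// each initialized at 1.0). Choosing `weight` well is the hard part.
double GaugePenalty(const std::vector<double>& exposures, double weight) {
  double sum = 0.0;
  for (double e : exposures) sum += e;
  return weight * (sum - static_cast<double>(exposures.size()));
}
```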

Is it possible to use something similar to SphereManifold, but on the same parameter across multiple parameter blocks? That would force the algorithm to keep the sum of exposure values constant, since that is how the tangent space would be defined.
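As far as I know there is no built-in cross-block manifold, and note that ceres::SphereManifold fixes the Euclidean norm of one block, not a sum across blocks. One workaround (a sketch, not a true tangent-space construction) is to project after each solver iteration: rescale all exposures so their sum returns to its initial value, which removes the global scale freedom:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Emulate a constraint across parameter blocks without a custom manifold:
// after each solver iteration, rescale all exposures so their sum is
// restored to `target_sum`. This is a projection, not a manifold, but it
// removes the global scale gauge freedom while preserving exposure ratios.
void RenormalizeExposures(std::vector<double>& exposures, double target_sum) {
  double sum = 0.0;
  for (double e : exposures) sum += e;
  if (std::abs(sum) < 1e-12) return;  // degenerate: nothing to rescale
  const double scale = target_sum / sum;
  for (double& e : exposures) e *= scale;
}
```

Alternatively, all exposures could be packed into a single parameter block, at the cost of destroying the sparsity structure Ceres exploits.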

One partial solution is to hold a single exposure parameter constant, so the others optimize toward it. But this has less and less effect as the problem grows, because many images have no cost function directly connecting them to the constant parameter block. Those parameters will still converge toward zero, and more so with increasing graph distance between the locked parameter block and the parameter block in question.

Another solution I came up with is incremental optimization: start with one parameter block held constant, then add and optimize only the adjacent parameters in the graph. After each step, make all optimized parameters constant and add new ones.
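The incremental schedule above amounts to a breadth-first traversal of the image graph; a sketch (the graph representation and function names are illustrative):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Sketch of the incremental schedule: given the image graph as adjacency
// lists and one image held constant as the root, return the order in which
// parameter blocks would be freed, optimized, and then frozen again.
std::vector<int> IncrementalOrder(const std::vector<std::vector<int>>& graph,
                                  int fixed_root) {
  std::vector<bool> seen(graph.size(), false);
  std::vector<int> order;
  std::queue<int> frontier;
  frontier.push(fixed_root);
  seen[fixed_root] = true;
  while (!frontier.empty()) {
    const int image = frontier.front();
    frontier.pop();
    order.push_back(image);  // in practice: optimize here, then set constant
    for (int neighbor : graph[image]) {
      if (!seen[neighbor]) {
        seen[neighbor] = true;
        frontier.push(neighbor);
      }
    }
  }
  return order;
}
```

The obvious drawback is that early decisions are never revisited, so errors can accumulate along the traversal instead of being distributed by a global solve.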

Am I missing something? I've run into a similar issue several times while solving different problems.

I can imagine the same problem arising in a multi-view ICP algorithm, where one "point cloud" is held constant and the others move, rotate, and scale to fit. What would prevent the floating "point clouds" from scaling to zero, where the errors (distances) would be minimal? Especially when there are many "point clouds", as in SLAM.

Regards, Mitja.

Sameer Agarwal

Mar 20, 2023, 2:37:22 PM
Mitja,

On Mon, Mar 20, 2023 at 11:34 AM Mitja P <mit...@gmail.com> wrote:
I am trying to calculate correction parameters to correct (equalize) the image colors of an entire dataset.

Let's assume we have an equation to calculate the corrected pixel color:
corrected(RGB, parameters) = parameters.exposure * RGB + parameters.brightness;

Then the residual error between two pixel colors is:
residual(RGB1, RGB2, parameters1, parameters2) = corrected(RGB2, parameters2) - corrected(RGB1, parameters1)

As you have rightly identified, the problem here is the objective function, which has an optimum at zero. So the problem to solve here is not at the level of Ceres but in the mathematical formulation itself. For example, don't you want your corrected images to still look like the original images? Shouldn't you have a term which tries to preserve some property of the original RGB image?
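One possible form of such a preservation term (a sketch in the spirit of this suggestion, with an illustrative weight, not a prescribed design): anchor each corrected image to its original colors, so that exposure = 0 becomes costly instead of optimal.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Hypothetical anchor term: penalize deviation of the corrected color from
// the original color. With exposure = 1 and brightness = 0 it vanishes;
// with exposure = 0 it pays the full (weighted) original color back.
std::array<double, 3> PriorResidual(const std::array<double, 3>& rgb,
                                    double exposure, double brightness,
                                    double weight) {
  return {weight * ((exposure * rgb[0] + brightness) - rgb[0]),
          weight * ((exposure * rgb[1] + brightness) - rgb[1]),
          weight * ((exposure * rgb[2] + brightness) - rgb[2])};
}
```

Added alongside the pairwise terms, this breaks the gauge freedom without singling out any one image.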

Sameer

