Hello,
I am trying to understand why I get different optimization solutions in these two cases with identical data:
1. Running the solver with options.max_num_iterations = 500
2. Running the solver with options.max_num_iterations = 1, and calling it m times until convergence, feeding the output of each call in as the initial values for the next call.
Basically, I was trying to display the output at each iteration to show how the curve-fitting algorithm converges, and then I realized that I am getting different final answers in the two cases.
Particular details for my problem: I am building with Visual Studio 2013 (C++). For my example the solver converges in around 25-50 iterations, and I am using DENSE_QR as the linear solver type.
Solver::Options options;
options.max_num_iterations = iterations;
options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
Problem problem;
while (a.next()) {
    problem.AddResidualBlock(
        new AutoDiffCostFunction<model_t::case1_t, 1, 1, 1, 1>(case1_f),
        NULL, &(*m1)[0], &(*m2)[0], &(*m3)[0]);
}
set_bounds(problem, *m1, 0.5, 1.0);
set_bounds(problem, *m2, M2lowValue, M2highValue);
set_bounds(problem, *m3, M3lowValue, M3highValue);
Solver::Summary summary;
Solve(options, &problem, &summary);
Thank you.
Regards,
Adit