Using the Ceres solver with OpenCV Mats


Nick Rosetti

Aug 31, 2016, 1:39:00 PM
to Ceres Solver

I am currently attempting to use the Ceres solver with OpenCV to refine a set of roughly computed parameters for an image.


There are 110 related parameters that have been roughly computed in an initial step. Assuming a 500x500 image, the two other relevant matrices, V and V2, are 250,000-by-110 and 250,000-by-1 respectively. Both are constants: V is my system matrix and V2 is the matrix of observations.


The residual is more or less computed by summing the result of ((V * parameters) - V2). This seems like a simple task, yet I am struggling to understand how to set up the inputs properly. I would like to avoid restructuring V and V2 into data types other than cv::Mat.


I know this may be rather simple, but does anyone have thoughts on how best to use Ceres for a problem like this?

Jakob Kirkegaard

Sep 1, 2016, 2:52:47 AM
to Ceres Solver
If you rely on automatic differentiation, I think you need to use Eigen matrices instead.


As Ceres and Eigen generally play nicely together, I try to keep all Ceres-related matrices in Eigen format, although a bit of copying may be required at times.

If you are not relying on automatic differentiation, why not just perform the calculations using the cv::Mats and set the residuals?
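
For illustration, here is a minimal sketch of the autodiff route, assuming the linear model (V * x - V2) from the original post; all names are illustrative, not from this thread. With automatic differentiation the functor is templated on a scalar type T (a ceres::Jet during derivative evaluation), so the arithmetic must be written in terms of T, which plain cv::Mat operations cannot do, but Eigen storage plus explicit loops can:

#include <ceres/ceres.h>
#include <Eigen/Core>

struct LinearResidual {
  LinearResidual(const Eigen::MatrixXd& V, const Eigen::VectorXd& V2)
      : V_(V), V2_(V2) {}

  // T is double during cost evaluation and ceres::Jet during
  // differentiation, so every intermediate value must be a T.
  template <typename T>
  bool operator()(const T* const params, T* residuals) const {
    // One residual per row of V: r_i = V.row(i) * x - V2(i).
    for (int i = 0; i < V_.rows(); ++i) {
      T acc(0.0);
      for (int j = 0; j < V_.cols(); ++j) {
        acc += V_(i, j) * params[j];  // double * Jet is well defined
      }
      residuals[i] = acc - V2_(i);
    }
    return true;
  }

  const Eigen::MatrixXd V_;   // constant system matrix
  const Eigen::VectorXd V2_;  // constant observations
};

With ceres::DYNAMIC as the residual dimension, an AutoDiffCostFunction wrapping this functor can take the row count of V at run time.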

-- 
Jakob

Luis-Jorge Romeo

Sep 1, 2016, 3:39:15 AM
to ceres-...@googlegroups.com
Hi, I've used OpenCV Mats in Ceres.

The simplest way for me is to use numeric differentiation and perform the Mat operations directly inside the functor. I assume the V and V2 Mats are constants, so you can pass them in via the constructor of your cost functor (the object you hand to AddResidualBlock).

To get the parameters as a Mat inside the residual function, you can use a Mat constructor like

Mat::Mat(Size size, int type, void* data, size_t step=AUTO_STEP)

directly with the pointer to the Ceres parameters.
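
For illustration, a minimal sketch of this setup (names are illustrative, and the linear model V * x - V2 from the original post is assumed). Ceres parameter blocks are arrays of double, so the wrapping Mat must be CV_64F:

#include <ceres/ceres.h>
#include <opencv2/core.hpp>

struct MatResidual {
  // The constant Mats (both CV_64F) are passed in once and stored.
  MatResidual(const cv::Mat& V, const cv::Mat& V2) : V_(V), V2_(V2) {}

  bool operator()(const double* const params, double* residuals) const {
    // Wrap the raw parameter pointer in a Mat header; no data is
    // copied, and Ceres keeps ownership of the memory.
    cv::Mat x(V_.cols, 1, CV_64F, const_cast<double*>(params));
    cv::Mat error = V_ * x - V2_;
    // One residual per element; Ceres squares and sums internally.
    for (int i = 0; i < error.rows; ++i)
      residuals[i] = error.at<double>(i);
    return true;
  }

  cv::Mat V_;   // constant system matrix
  cv::Mat V2_;  // constant observations
};

The functor then plugs into a NumericDiffCostFunction in the usual way, with the residual and parameter counts as template arguments.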

Hope it helps


Nick Rosetti

Sep 1, 2016, 9:13:27 AM
to Ceres Solver
I ended up using numeric differentiation with Mat operations, simply making the constant matrices members of the struct containing the functor. However, I am now running into an issue where the cost reported in the summary is orders of magnitude larger than what the residual is being set to. For example, on my first set of images the residual is ~2e4 and the cost is 1.67e8. Am I misunderstanding the cost that I am seeing? This is resulting in a high number of unsuccessful iterations. Any thoughts?

Nick Rosetti

Sep 1, 2016, 9:33:37 AM
to Ceres Solver
I realized that it is most likely just the way Ceres computes the cost (it reports one half of the sum of squared residuals). I am still getting a high number of unsuccessful iterations, though. It is a bit frustrating, as I was able to quickly write some Matlab that used Levenberg-Marquardt optimization, and I have spent several days porting it to C++ to no avail.

Luis-Jorge Romeo

Sep 2, 2016, 8:58:48 AM
to ceres-...@googlegroups.com
Can you post your cost functor code?

You can also check that the residuals you are getting, at least in the first iteration, are the same as Matlab's, to get a clue about where the failure originates...


Nick Rosetti

Sep 2, 2016, 10:16:00 AM
to Ceres Solver
I will attach the output of the solver below. Here is my code. The residual calculation seems to be correct when compared to Matlab.

Setup code:
CostFunction* cost_function =
    new NumericDiffCostFunction<costFunctor, ceres::CENTRAL, 1, numParameters /* 55 */ * 2>(min);
problem.AddResidualBlock(cost_function, NULL, params);

Solver::Options options;
options.trust_region_strategy_type = ceres::LEVENBERG_MARQUARDT;
options.max_num_iterations = 15;
options.minimizer_progress_to_stdout = true;

Solver::Summary summary;
Solve(options, &problem, &summary);


Cost functor:

bool operator()(const double* const params, double* residual) const {
  cv::Mat a(55, 1, CV_32F), b(55, 1, CV_32F);
  for (int i = 0; i < 55; i++)
    a.at<float>(i) = static_cast<float>(params[i]);
  for (int i = 0; i < 55; i++)
    b.at<float>(i) = static_cast<float>(params[i + 55]);
  /* some math */

  double res = static_cast<double>(cv::sum(pow(error, 2))[0]);
  residual[0] = res;

  return true;
}


Output from Ceres

iter      cost       cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  3.579274e+008   0.00e+000   1.35e+013  0.00e+000  0.00e+000  1.00e+004        0  8.37e+000  8.37e+000
   1  3.579274e+008  -4.25e+009   0.00e+000  5.17e+001 -1.19e+001  5.00e+003        1  4.38e-002  8.42e+000
   2  3.579274e+008  -4.25e+009   0.00e+000  5.17e+001 -1.19e+001  1.25e+003        1  3.90e-002  8.47e+000
   3  3.579274e+008  -4.25e+009   0.00e+000  5.17e+001 -1.19e+001  1.56e+002        1  4.17e-002  8.52e+000
   4  3.579274e+008  -4.25e+009   0.00e+000  5.17e+001 -1.19e+001  9.77e+000        1  4.05e-002  8.56e+000
   5  3.579274e+008  -4.25e+009   0.00e+000  5.16e+001 -1.19e+001  3.05e-001        1  3.96e-002  8.61e+000
   6  3.579274e+008  -4.25e+009   0.00e+000  5.02e+001 -1.19e+001  4.77e-003        1  3.85e-002  8.66e+000
   7  3.579274e+008  -3.97e+009   0.00e+000  1.78e+001 -1.95e+001  3.73e-005        1  4.15e-002  8.71e+000
   8  3.570205e+008   9.07e+005   1.42e+013  2.11e-001  3.11e-001  3.53e-005        1  8.06e+000  1.68e+001
   9  3.570205e+008  -4.14e+008   0.00e+000  1.04e+000 -7.27e+001  1.77e-005        1  3.32e-002  1.68e+001
  10  3.570205e+008  -1.07e+008   0.00e+000  5.22e-001 -2.47e+001  4.42e-006        1  3.41e-002  1.69e+001
  11  3.570205e+008  -6.99e+006   0.00e+000  1.31e-001 -1.87e+000  5.52e-007        1  3.57e-002  1.69e+001
  12  3.570931e+008  -7.26e+004   1.37e+013  1.63e-002  2.82e-001  5.10e-007        1  7.33e+000  2.42e+001
  13  3.570547e+008   3.84e+004   1.37e+013  2.54e-003  9.59e-001  1.53e-006        1  7.72e+000  3.20e+001
  14  3.569429e+008   1.12e+005   1.37e+013  8.53e-003  9.30e-001  4.21e-006        1  7.16e+000  3.91e+001
  15  3.565908e+008   3.52e+005   1.34e+013  3.01e-002  1.07e+000  1.26e-005        1  7.02e+000  4.61e+001


Solver Summary (v 1.11.0-eigen-(3.2.9)-no_lapack-cxsparse-(2.3.0)-openmp)

                                        Original                  Reduced
Parameter blocks                               1                        1
Parameters                                   110                      110
Residual blocks                                1                        1
Residual                                       1                        1

Minimizer                          TRUST_REGION

Sparse linear algebra library         CX_SPARSE
Trust region strategy       LEVENBERG_MARQUARDT

                                           Given                     Used
Linear solver             SPARSE_NORMAL_CHOLESKY   SPARSE_NORMAL_CHOLESKY
Threads                                        1                        1
Linear solver threads                          1                        1

Cost:
Initial                          3.579274e+008
Final                            3.565908e+008
Change                           1.336553e+006

Minimizer iterations                        15
Successful steps                             5
Unsuccessful steps                          10

Time (in seconds):
Preprocessor                            0.0001

  Residual evaluation                   0.5452
  Jacobian evaluation                  45.4726
  Linear solver                         0.0301
Minimizer                              46.1535

Postprocessor                           0.0000
Total                                  46.1536

Termination:                   NO_CONVERGENCE (Maximum number of iterations reached.)




Luis-Jorge Romeo

Sep 2, 2016, 11:26:51 AM
to ceres-...@googlegroups.com
The line:
double res = static_cast<double>(cv::sum(pow(error, 2))[0]);

does not look right to me. Ceres already squares and sums the residuals internally. Wouldn't it make more sense to use the full "error" vector as the residuals (dimension n instead of 1), rather than summing and squaring it inside the functor?
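
Applied to the functor posted above, that change would look something like this (a sketch; the /* some math */ step producing the CV_32F vector "error" is still elided, and kNumResiduals in the NumericDiffCostFunction template must then match error.rows instead of 1):

bool operator()(const double* const params, double* residual) const {
  cv::Mat a(55, 1, CV_32F), b(55, 1, CV_32F);
  for (int i = 0; i < 55; i++)
    a.at<float>(i) = static_cast<float>(params[i]);
  for (int i = 0; i < 55; i++)
    b.at<float>(i) = static_cast<float>(params[i + 55]);
  /* some math producing "error" */

  // Write the raw error elements; Ceres squares and sums them internally.
  for (int i = 0; i < error.rows; i++)
    residual[i] = static_cast<double>(error.at<float>(i));

  return true;
}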

Regards,
Jorge Romeo




Nick Rosetti

Sep 2, 2016, 12:48:48 PM
to Ceres Solver
Yeah, I read a couple of papers and looked at some of the source code, and realized that line was a mistake. I set Ceres up to use a dynamically sized residual vector, and things seem to be working as expected now. Now I need to work on efficiency, though I know that having ~250,000 residuals and 110 parameters is probably going to make things pretty slow (roughly 10 s per iteration). Not sure there is much I can do on that front.
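
For reference, the dynamically sized setup described here could look roughly like the following sketch, reusing the costFunctor/min/params names from the earlier post. DynamicNumericDiffCostFunction requires the functor's operator() to take double const* const* parameters, and lets the residual count be set at run time:

// costFunctor's operator() must use the dynamic signature:
//   bool operator()(double const* const* parameters, double* residuals) const;
ceres::DynamicNumericDiffCostFunction<costFunctor, ceres::CENTRAL>* cost_function =
    new ceres::DynamicNumericDiffCostFunction<costFunctor, ceres::CENTRAL>(min);
cost_function->AddParameterBlock(110);     // all 110 parameters in one block
cost_function->SetNumResiduals(250000);    // one residual per pixel
problem.AddResidualBlock(cost_function, NULL, params);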