Curve Fitting and ceres::Jet<double, 6> error


Miaoqi Zhu

Dec 22, 2016, 8:28:30 PM
to Ceres Solver
I am writing a tool and I decided to use Ceres Solver to handle minimization.

I followed some basic examples, such as "Robust Curve Fitting" on the site; however, I want to pass the parameters to my own functions to calculate the residuals.

Basically, I need to compute the set of parameters that minimizes the "distance" between two 190x3 matrices. One matrix stays unchanged; the other changes based on the adjusted parameters after each iteration. Thus, if I did everything inside "bool operator()(const T* const B, T* residual) const { ... }", it would be inefficient, so I am writing my own routines to get some of the work done.

Since most of my routines take float, I am not sure whether the Jet structure has a method to convert a Jet to double or float. Right now the compilation error is essentially "cannot initialize a parameter of type 'const double *' with an lvalue of type 'const ceres::Jet<double, 6> *const'", which I understand after looking at https://ceres-solver.googlesource.com/ceres-solver/+/1.6.0/include/ceres/jet.h

Perhaps "Curve Fitting" does not meet my need? Any advice is much appreciated!

Thanks a lot!

Sameer Agarwal

Dec 22, 2016, 9:17:15 PM
to Ceres Solver
Miaoqi,
Let me try and rephrase my understanding of the problem first.

You have a fixed matrix A, some parameters p, and some method of generating a matrix B from p, and you are interested in minimizing ||A - B(p)||^2 with respect to p.

Now, the process of generating B from p requires calls to library routines that only work on floats.

there are two different issues here.

1. Calling external library routines that work on plain scalars, not Jets.
2. Dealing with the fact that the external libraries work on floats and not doubles.

If (1) does not return derivatives, then you should just use numeric differentiation instead of automatic differentiation and all of these problems go away. For numeric differentiation to work well, I do not recommend doing your computations in floats. 

If there are parts where you want to use automatic derivatives, then I can offer you some more suggestions.
Sameer


--
You received this message because you are subscribed to the Google Groups "Ceres Solver" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ceres-solver...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ceres-solver/c319208d-4290-4418-b1bb-6f4ed2d01b80%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Miaoqi Zhu

Dec 24, 2016, 2:22:57 AM
to Ceres Solver
Hi Sameer,

Thanks for your quick reply! 

I tried "NumericDiffCostFunction". The code compiles and runs. However, the residuals seem to stay the same (see below):

"

1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13
1.67332e+13

Ceres Solver Report: Iterations: 0, Initial cost: 3.979339e+00, Final cost: 3.979339e+00, Termination: CONVERGENCE

"

To give you a quick look at the code, below is the excerpt that defines the cost function, problem, etc.

-----------------------------------------------------------------------------------------------------

double B_start[6] = {1.0, 0.0, 0.0, 1.0, 0.0, 0.0};

Problem problem;

CostFunction* cost_function =
    new NumericDiffCostFunction<Objfun, CENTRAL, 1, 6>(new Objfun(mtx1, mtx2),
                                                       TAKE_OWNERSHIP);
problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), B_start);

Solver::Options options;
options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = true;
Solver::Summary summary;
Solve(options, &problem, &summary);
std::cout << summary.BriefReport() << "\n";


-----------------------------------------------------------------------------------------------------

Then in the "Objfun" class, I have:

-----------------------------------------------------------------------------------------------------

bool operator()(const double* const B, double* residual) const {
    residual[0] = findDistance(_mtx1, _mtx2, B);
    return true;
}


-----------------------------------------------------------------------------------------------------

"findDistance" is the routine that does a lot of computation on the two matrices and the array B. Other routines are called as well.

From the output, it does not seem that the members of B changed during the minimization process.

I will change the other routines to return double, so your second concern is addressed.

Thanks and have a great day!

Miaoqi


Sameer Agarwal

Dec 24, 2016, 3:51:10 AM
to Ceres Solver
Miaoqi,
The solver only ran for one iteration, so something is not right. Can you print out the output of summary.FullReport()?


Miaoqi Zhu

Dec 24, 2016, 6:03:54 PM
to ceres-...@googlegroups.com
Hi Sameer,

Thanks for the quick reply!

Below is the full report:

----------------------------------------------------------------------

Solver Summary (v 1.11.0-eigen-(3.2.9)-lapack-suitesparse-(4.5.3)-cxsparse-(3.1.9)-no_openmp)

                                     Original                  Reduced
Parameter blocks                            1                        1
Parameters                                  6                        6
Residual blocks                             1                        1
Residual                                    1                        1

Minimizer                        TRUST_REGION

Dense linear algebra library            EIGEN
Trust region strategy     LEVENBERG_MARQUARDT

                                        Given                     Used
Linear solver                        DENSE_QR                 DENSE_QR
Threads                                     1                        1
Linear solver threads                       1                        1

Cost:
Initial                          3.979339e+00
Final                            3.979339e+00
Change                           0.000000e+00

Minimizer iterations                        0
Successful steps                            0
Unsuccessful steps                          0

Time (in seconds):
Preprocessor                           0.0001
  Residual evaluation                  0.0000
  Jacobian evaluation                  0.0084
  Linear solver                        0.0000
Minimizer                              0.0084
Postprocessor                          0.0000
Total                                  0.0085

Termination:                      CONVERGENCE (Gradient tolerance reached. Gradient max norm: 0.000000e+00 <= 1.000000e-10)

----------------------------------------------------------------------

Thanks and Merry Christmas!

Miaoqi


To unsubscribe from this group and stop receiving emails from it, send an email to ceres-solver+unsubscribe@googlegroups.com.

--
You received this message because you are subscribed to a topic in the Google Groups "Ceres Solver" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/ceres-solver/qasIhdM2nQg/unsubscribe.
To unsubscribe from this group and all its topics, send an email to ceres-solver+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/ceres-solver/CABqdRUDSM5%2BT8XBXDtPF5BdxHpHetpwHYMpDYynA-ou5kPmgDw%40mail.gmail.com.

Sameer Agarwal

Dec 24, 2016, 7:25:07 PM
to ceres-...@googlegroups.com
Are your underlying routines using floats for computations?
If so, then the perturbations the solver makes to compute the derivatives are not seen by the library routines; you get a derivative of zero, and the solver thinks you are done.
Sameer



Miaoqi Zhu

Jan 16, 2017, 9:09:47 PM
to ceres-...@googlegroups.com
Hi Sameer,

Thanks for the help last time!

I changed the return type of the routines to "double". Things seem to be running well.

Two questions: 

1. How is Ceres Solver different from Matlab's "lsqnonlin"? I used the same input and user-defined evaluation functions, but the results are quite different.

2. Can I specify the maximum number of iterations, the evaluation method (e.g., for the Jacobian), or the tolerance used to stop iterating?

Below is the full report;

"

iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  2.933624e+00    0.00e+00    1.43e+00   0.00e+00   0.00e+00  1.00e+04        0    7.11e-03    7.20e-03
   1  2.933624e+00   -8.95e-01    0.00e+00   7.80e-01  -7.16e+00  5.00e+03        1    6.65e-04    7.89e-03
   2  2.933624e+00   -8.95e-01    0.00e+00   7.80e-01  -7.16e+00  1.25e+03        1    6.01e-04    8.50e-03
   3  2.933624e+00   -8.95e-01    0.00e+00   7.80e-01  -7.16e+00  1.56e+02        1    6.00e-04    9.11e-03
   4  2.933624e+00   -8.94e-01    0.00e+00   7.79e-01  -7.15e+00  9.77e+00        1    5.97e-04    9.71e-03
   5  2.933624e+00   -8.83e-01    0.00e+00   7.67e-01  -7.07e+00  3.05e-01        1    5.98e-04    1.03e-02
   6  2.933624e+00   -6.07e-01    0.00e+00   5.04e-01  -5.55e+00  4.77e-03        1    5.97e-04    1.09e-02
   7  2.930928e+00    2.70e-03    1.46e+00   2.17e-02   3.93e-01  4.72e-03        1    7.74e-03    1.87e-02
   8  2.925629e+00    5.30e-03    1.28e+00   1.04e-02   7.80e-01  5.73e-03        1    7.89e-03    2.66e-02
   9  2.923932e+00    1.70e-03    1.51e+00   1.61e-02   2.08e-01  4.77e-03        1    9.14e-03    3.57e-02
  10  2.918368e+00    5.56e-03    1.48e+00   8.09e-03   8.11e-01  6.28e-03        1    7.89e-03    4.36e-02
  11  2.918368e+00   -4.77e-02    0.00e+00   7.35e-02  -5.35e+00  3.14e-03        1    6.03e-04    4.43e-02
  12  2.918368e+00   -1.11e-02    0.00e+00   3.74e-02  -2.43e+00  7.85e-04        1    5.99e-04    4.49e-02
  13  2.918188e+00    1.79e-04    1.40e+00   9.49e-03   1.53e-01  5.89e-04        1    7.65e-03    5.25e-02
  14  2.917306e+00    8.82e-04    1.45e+00   2.47e-03   1.00e+00  1.77e-03        1    7.63e-03    6.02e-02
  15  2.917306e+00   -3.96e-01    0.00e+00   1.71e-01  -1.80e+02  8.83e-04        1    5.99e-04    6.08e-02
  16  2.917306e+00   -1.14e-01    0.00e+00   8.59e-02  -1.02e+02  2.21e-04        1    5.98e-04    6.14e-02
  17  2.917306e+00   -6.09e-03    0.00e+00   2.16e-02  -2.18e+01  2.76e-05        1    6.01e-04    6.20e-02
  18  2.917306e+00   -6.32e-05    0.00e+00   2.70e-03  -1.81e+00  1.72e-06        1    5.97e-04    6.26e-02

Solver Summary (v 1.11.0-eigen-(3.2.9)-lapack-suitesparse-(4.5.3)-cxsparse-(3.1.9)-no_openmp)

                                     Original                  Reduced
Parameter blocks                            1                        1
Parameters                                  6                        6
Residual blocks                             1                        1
Residual                                    1                        1

Minimizer                        TRUST_REGION

Dense linear algebra library            EIGEN
Trust region strategy     LEVENBERG_MARQUARDT

                                        Given                     Used
Linear solver                        DENSE_QR                 DENSE_QR
Threads                                     1                        1
Linear solver threads                       1                        1

Cost:
Initial                          2.933624e+00
Final                            2.917306e+00
Change                           1.631806e-02

Minimizer iterations                       18
Successful steps                            6
Unsuccessful steps                         12

Time (in seconds):
Preprocessor                           0.0001
  Residual evaluation                  0.0113
  Jacobian evaluation                  0.0514
  Linear solver                        0.0001
Minimizer                              0.0631
Postprocessor                          0.0000
Total                                  0.0632

Termination:                      CONVERGENCE (Function tolerance reached. |cost_change|/cost: 5.499025e-07 <= 1.000000e-06)

"

Thanks,

Miaoqi



Sameer Agarwal

Jan 17, 2017, 12:39:39 AM
to ceres-...@googlegroups.com
Miaoqi,

Two questions: 

1. How is Ceres Solver different from Matlab's "lsqnonlin"? I used the same input and user-defined evaluation functions, but the results are quite different.

It is a different implementation of the Levenberg-Marquardt algorithm. Differences in their performance are expected.


2. Can I specify the maximum number of iterations, the evaluation method (e.g., for the Jacobian), or the tolerance used to stop iterating?

Yes, please have a look at the various settings in the ceres::Solver::Options object.
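For example, a sketch of common stopping controls (field names from the ceres::Solver::Options struct; the values here are illustrative, not recommendations):

```cpp
ceres::Solver::Options options;
options.max_num_iterations = 200;        // cap on minimizer iterations
options.function_tolerance = 1e-10;      // stop when |cost change| / cost is small
options.gradient_tolerance = 1e-12;      // stop when the max-norm of the gradient is small
options.parameter_tolerance = 1e-12;     // stop when the relative step size is small
options.minimizer_type = ceres::TRUST_REGION;  // or ceres::LINE_SEARCH
```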


Sameer

 

Miaoqi Zhu

Jan 20, 2017, 9:35:11 PM
to ceres-...@googlegroups.com
Hi Sameer,

Thanks for the suggestion last time!

One question about the dependency of "ceres-solver" on "eigen":

Assuming I have brewed "eigen", "glog" and "suite-sparse": if I do "brew install ceres-solver --HEAD" or "brew install ceres-solver", then when I build my own program with cmake, it complains about a mismatch with ceres-solver's default eigen version (3.8.8 or 3.8.9?); on the other hand, if I install "ceres-solver" by downloading the latest package first and building with cmake/make, there is no such error when compiling my program.

Any advice is certainly appreciated!

Regards,

Miaoqi


Sameer Agarwal

Jan 20, 2017, 10:56:08 PM
to ceres-...@googlegroups.com
Miaoqi,
I believe the binary that ships with Homebrew is compiled against an older version of eigen than the version of eigen that Homebrew currently ships (yes, this happens); this is why Ceres complains.

If you do "brew install ceres-solver --HEAD", that should compile Ceres from source and should not have any of these problems, unless for some reason you have multiple versions of eigen installed.

Sameer


Miaoqi Zhu

Jan 30, 2017, 9:34:17 PM
to ceres-...@googlegroups.com
Hi Sameer,

Thanks for the response last time!

I have finished the main part of my program, which uses ceres::Solve as the core routine for non-linear minimization. However, the resulting 3x3 matrix fitting the two curves (i.e., the input) differs from the results of other software packages.

---------------------------------------------------------------------

Software 1:
0.693995  0.235209 0.0707957
0.0578681 1.06262  -0.120487
0.0303731 -0.13349 1.10312

Software 2:
0.7147    0.2166    0.0687
0.0613    1.0690   -0.1303
0.0391   -0.1520    1.1129

Ceres Solver (LINE_SEARCH):
0.785592 0.123477 0.090931
0.161999 1.023926 -0.185925
0.054799 -0.142472 1.087673

Ceres Solver (TRUST_REGION):
0.991713 0.003293 0.004995
0.004799 1.010738 -0.015537
0.005578 -0.006325 1.000747

---------------------------------------------------------------------



The input values are the same. The first two are similar; the Ceres Solver results (LINE_SEARCH is closer) are noticeably different from the first two matrices.


Going back to ceres::Solver, below are the options that I used:

--------------------------------------------------------------------

options.linear_solver_type = ceres::DENSE_QR;
options.minimizer_progress_to_stdout = false;
options.parameter_tolerance = 1e-17;
options.gradient_tolerance = 1e-17;
options.function_tolerance = 1e-17;
options.max_num_iterations = 100;
options.minimizer_type = LINE_SEARCH;

--------------------------------------------------------------------



Below is the full report (please see the warning):

--------------------------------------------------------------------

WARNING: Logging before InitGoogleLogging() is written to STDERR
W0131 02:27:41.086499 3368346560 line_search.cc:772] Line search failed: Wolfe zoom phase failed to find a point satisfying strong Wolfe conditions within specified max_num_iterations: 20, (num iterations taken for bracketing: 1).

Solver Summary (v 1.11.0-eigen-(3.2.9)-lapack-suitesparse-(4.5.3)-cxsparse-(3.1.9)-no_openmp)

                                     Original                  Reduced
Parameter blocks                            1                        1
Parameters                                  6                        6
Residual blocks                             1                        1
Residual                                    1                        1

Minimizer                         LINE_SEARCH
Line search direction              LBFGS (20)
Line search type                  CUBIC WOLFE

                                        Given                     Used
Threads                                     1                        1

Cost:
Initial                          2.933624e+00
Final                            2.145195e+00
Change                           7.884290e-01

Minimizer iterations                       24

Time (in seconds):
Preprocessor                           0.0099
  Residual evaluation                  0.0000
    Line search cost evaluation        0.0000
  Jacobian evaluation                  0.7445
    Line search gradient evaluation    0.5244
  Line search polynomial minimization  0.0009
Minimizer                              0.7482
Postprocessor                          0.0000
Total                                  0.7582

Termination:                      CONVERGENCE (Parameter tolerance reached. Relative step_norm: 0.000000e+00 <= 1.000000e-17.)

--------------------------------------------------------------------


Any suggestion is appreciated!


Miaoqi



Sameer Agarwal

Jan 30, 2017, 10:31:57 PM
to ceres-...@googlegroups.com
Something is not quite right in your formulation.
You have a one-dimensional residual?


Miaoqi Zhu

Jan 31, 2017, 1:57:22 PM
to ceres-...@googlegroups.com
Hi Sameer,


Yes, I have a one-dimensional residual:

----------------------------------------------------------------------------------------------------------------------

bool operator()(const double* const B, double* residual) const {
    residual[0] = distance(RGB, XYZtoLAB(XYZ), B);
    return true;
}

----------------------------------------------------------------------------------------------------------------------

If I do not use a one-dimensional residual, there is a compilation error such as:

----------------------------------------------------------------------------------------------------------------------

/usr/local/include/ceres/internal/numeric_diff.h:65:68: note: in instantiation
      of member function 'ceres::internal::VariadicEvaluate<rta::Objfun, double,
      6, 0, 0, 0, 0, 0, 0, 0, 0, 0>::Call' requested here
                          N0, N1, N2, N3, N4, N5, N6, N7, N8, N9>::Call(
                                                                   ^
/usr/local/include/ceres/numeric_diff_cost_function.h:250:20: note: in
      instantiation of function template specialization
      'ceres::internal::EvaluateImpl<rta::Objfun, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0>'
      requested here
    if (!internal::EvaluateImpl<CostFunctor,
                   ^
/usr/local/include/ceres/numeric_diff_cost_function.h:193:3: note: in
      instantiation of member function
      'ceres::NumericDiffCostFunction<rta::Objfun,
      ceres::NumericDiffMethodType::CENTRAL, 1, 6, 0, 0, 0, 0, 0, 0, 0, 0,
      0>::Evaluate' requested here
  NumericDiffCostFunction(
  ^
/Users/miaoqizhu/Source/rawtoaces_IDT/lib/rta.cpp:872:21: note: in instantiation
      of member function 'ceres::NumericDiffCostFunction<rta::Objfun,
      ceres::NumericDiffMethodType::CENTRAL, 1, 6, 0, 0, 0, 0, 0, 0, 0, 0,
      0>::NumericDiffCostFunction' requested here
                new NumericDiffCostFunction<Objfun, CENTRAL, 1, 6>(new O...
                    ^
/Users/miaoqizhu/Source/rawtoaces_IDT/lib/rta.h:292:18: note: candidate function
      not viable: no known conversion from 'double *' to 'double' for 2nd
      argument; dereference the argument with *
            bool operator()(const double* const B,

----------------------------------------------------------------------------------------------------------------------


The routine "distance(...)" calculates Delta E (CIE 1976) and assigns it to the residual.


The mean and max residuals during the last iteration are:
Max: 11.872147,  Mean: 3.127676

Just for comparison, another math software package gives:
Max: 9.8977,  Mean: 2.62073


Thanks a lot, 


Miaoqi




Sameer Agarwal

Jan 31, 2017, 1:58:20 PM
to ceres-...@googlegroups.com
You have exactly one data point?



Miaoqi Zhu

Jan 31, 2017, 2:05:42 PM
to ceres-...@googlegroups.com
Hi Sameer,

I am not sure what you mean by "one data point". There are 190 observations, each of which contains 3 values.

For example, "RGB" is a 190 by 3 matrix. The same goes for "XYZ".

The outcome (the updated "B") should describe the best curve fit between the two sets of transformed observations based on "RGB" and "XYZ".

Thanks,

Miaoqi



Sameer Agarwal

Jan 31, 2017, 2:10:41 PM
to ceres-...@googlegroups.com
Miaoqi,
I think you are setting up the problem wrong.
There should be one residual per data point, rather than one residual for all 190 points.
Ceres will then square the residuals and add them; it needs access to the 190-row Jacobian.

It seems you are combining the distances of all 190 points into a single scalar inside your cost function.
Sameer

Miaoqi Zhu

Jan 31, 2017, 3:45:21 PM
to ceres-...@googlegroups.com
Hi Sameer,

In that case, should I change the cost function to something like:

--------------------------------------------------------------------------------------------

FORI(190) {
    residual[0] = 0.0;
    FORJ(3) residual[0] += std::pow((target[i][j] - ref[i][j]), 2.0);
    residual[0] = std::pow(residual[0], 1.0/4.0);
}

--------------------------------------------------------------------------------------------

So it sums up the differences at each observation, and there are 190 observations in total. I have to square the difference for each component of each observation and add them together, as I need to follow how Delta E is calculated.

However, the resulting matrix is even more different. 

BTW, should it be "residual[0]" or "residual[i]"? I thought it is an array of residuals, each of which corresponds to one data point?

Do you have a good example that I can refer to for multiple residuals? 

Below is the full report:

--------------------------------------------------------------------------------------------

WARNING: Logging before InitGoogleLogging() is written to STDERR
W0131 20:43:20.208451 2910708672 line_search.cc:772] Line search failed: Wolfe zoom phase failed to find a point satisfying strong Wolfe conditions within specified max_num_iterations: 20, (num iterations taken for bracketing: 1).

Solver Summary (v 1.11.0-eigen-(3.2.9)-lapack-suitesparse-(4.5.3)-cxsparse-(3.1.9)-no_openmp)

                                     Original                  Reduced
Parameter blocks                            1                        1
Parameters                                  6                        6
Residual blocks                             1                        1
Residual                                    1                        1

Minimizer                         LINE_SEARCH
Line search direction              LBFGS (20)
Line search type                  CUBIC WOLFE

                                        Given                     Used
Threads                                     1                        1

Cost:
Initial                          5.174013e-01
Final                            3.883969e-01
Change                           1.290044e-01

Minimizer iterations                        3

Time (in seconds):
Preprocessor                           0.0001
  Residual evaluation                  0.0000
    Line search cost evaluation        0.0000
  Jacobian evaluation                  0.2542
    Line search gradient evaluation    0.2215
  Line search polynomial minimization  0.0003
Minimizer                              0.2549
Postprocessor                          0.0000
Total                                  0.2549

Termination:                      CONVERGENCE (Parameter tolerance reached. Relative step_norm: 0.000000e+00 <= 1.000000e-17.)

--------------------------------------------------------------------------------------------


Thanks,

Miaoqi


--
You received this message because you are subscribed to a topic in the Google Groups "Ceres Solver" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/ceres-solver/qasIhdM2nQg/unsubscribe.
To unsubscribe from this group and all its topics, send an email to ceres-solver+unsubscribe@googlegroups.com.

Sameer Agarwal

unread,
Jan 31, 2017, 3:59:07 PM1/31/17
to ceres-...@googlegroups.com
you should create one costfunction/residual block for each observation. There should be no loops inside your cost function.

basically for every row in your matrix, you should have a single cost function.

also, why are you squaring and adding the norm yourself? it is better to expose the 3-dimensional residual to ceres; it will automatically square and sum the residuals into the final cost. so in fact your cost function should have a three-dimensional residual

where

for (int j = 0; j < 3; ++j) {
  residual[j] = target[j] - ref[j];
}

where each cost function has just one target and one ref.



Miaoqi Zhu

unread,
Feb 2, 2017, 5:30:50 PM2/2/17
to ceres-...@googlegroups.com
Hi Sameer,

Thanks for your timely response every time! I really appreciate it!

I have spent time in thinking about your suggestion in the context of the problem that I am solving.

In fact, there may be just one observation (one big data point - 190x3 matrix). Specifically, this data point consists of 190 elements. Each of the elements has 3 values. That is why I have two loops inside one cost function. 

If I treat each of the 190 elements as one distinct observation, I basically need to move the loop over the 190 elements out of the cost function into main() and leave the loop over the 3 values inside, as you suggested last time.

In order to focus on getting it right, I changed the input to a much smaller data set (10x3). When I replicated the approach you recommended, its result is far from that of Matlab's "lsqnonlin(...)". Matlab stopped at a residual level of "1.3539e-30".


My solver options:

-----------------------------------------------------------------------------------------------------------------

        options.linear_solver_type = ceres::DENSE_QR;
        options.minimizer_progress_to_stdout = false;
        options.parameter_tolerance = 1e-17;
        options.gradient_tolerance = 1e-17;
        options.function_tolerance = 1e-17;

-----------------------------------------------------------------------------------------------------------------


You can view the report here:

-----------------------------------------------------------------------------------------------------------------

Solver Summary (v 1.11.0-eigen-(3.2.9)-lapack-suitesparse-(4.5.3)-cxsparse-(3.1.9)-no_openmp)

                                     Original                  Reduced
Parameter blocks                            1                        1
Parameters                                  6                        6
Residual blocks                            10                       10
Residual                                   30                       30

Minimizer                        TRUST_REGION

Dense linear algebra library            EIGEN
Trust region strategy     LEVENBERG_MARQUARDT

                                        Given                     Used
Linear solver                        DENSE_QR                 DENSE_QR
Threads                                     1                        1
Linear solver threads                       1                        1

Cost:
Initial                          5.305099e-06
Final                            5.305063e-06
Change                           3.539728e-11

Minimizer iterations                       10
Successful steps                            1
Unsuccessful steps                          9

Time (in seconds):
Preprocessor                           0.0001

  Residual evaluation                  0.0015
  Jacobian evaluation                  0.0039
  Linear solver                        0.0001
Minimizer                              0.0056

Postprocessor                          0.0000
Total                                  0.0057

Termination:                      CONVERGENCE (Function tolerance reached. |cost_change|/cost: 0.000000e+00 <= 1.000000e-17)

-----------------------------------------------------------------------------------------------------------------



--
Regards!

Miaoqi Zhu, PhD

Sameer Agarwal

unread,
Feb 2, 2017, 6:58:28 PM2/2/17
to ceres-...@googlegroups.com
Miaoqi,
At this point you will have to tell me more about your mathematical problem before I can tell you more.
Sameer



Miaoqi Zhu

unread,
Feb 2, 2017, 7:19:33 PM2/2/17
to ceres-...@googlegroups.com
Hi Sameer,

Here is the Matlab script for testing purposes. The cost function is the nested "objfun" below. Basically,

  • There is an input (10x3) matrix - "in";
  • There is a "ground_truth" (3x3) matrix;
  • There is a starting matrix "B" to be updated, built from a guessed array "mat_coeff_starting_guess";
  • By updating "B", we want ("in" x "B") to get closer to ("in" x "ground_truth").

I got all the initial input values to match. I think the key is how to set up the "residual" the way "lsqnonlin" in Matlab sets it up.
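In symbols (my reading of the Matlab setup below), the problem being solved is the nonlinear least-squares fit:

```latex
\min_{p \in \mathbb{R}^6} \;
\left\| \mathrm{out} - \mathrm{in}\, B(p)^{\mathsf T} \right\|_F^2,
\qquad
B(p) =
\begin{pmatrix}
p_1 & p_2 & 1 - p_1 - p_2 \\
p_3 & p_4 & 1 - p_3 - p_4 \\
p_5 & p_6 & 1 - p_5 - p_6
\end{pmatrix},
\qquad
\mathrm{out} = \mathrm{in}\, G^{\mathsf T},
```

where `in` is the fixed 10x3 input matrix and `G` is the 3x3 `ground_truth` matrix, matching Matlab's `out = (ground_truth * in')'` and `out_calc = (B * in')'`.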

-----------------------------------------------------------------------------

function test_regression()

    % Define some random values
    in = [0.0192    0.0186    0.0213;
          0.0896    0.0894    0.0891;
          0.7882    0.7807    0.7835;
          0.1995    0.1995    0.1995;
          0.5940    0.5911    0.5902;
          0.4274    0.4276    0.4263;
          0.3004    0.2990    0.2962;
          0.1960    0.1955    0.1948;
          0.1122    0.1130    0.1129;
          0.0644    0.0654    0.0653];

    % Create a ground truth matrix
    ground_truth = [0.8 0.3 -0.1; -0.25 0.8 0.45; 0.01 0.19 0.8];

    % Determine the output values when multiplied by the ground truth
    out = (ground_truth * in')';

    % Create an objective function
    function y = objfun(in, out, mat_vector)
        B = [mat_vector(1) mat_vector(2) 1-mat_vector(1)-mat_vector(2); ...
             mat_vector(3) mat_vector(4) 1-mat_vector(3)-mat_vector(4); ...
             mat_vector(5) mat_vector(6) 1-mat_vector(5)-mat_vector(6)];

        out_calc = (B * in')';

        % y is the element-wise subtraction matrix
        y = out - out_calc;
    end

    % Define some starting guess
    mat_coeff_starting_guess = [1 0 0 1 0 0];

    % The anonymous function ... @(variable to solve
    % for)function(variables the function requires)
    anon_fun = @(mat_coeff_vector)objfun(in, out, mat_coeff_vector);

    % Regression options
    options = optimset('Display','iter','TolFun',1e-17,'TolX',1e-17);

    % Go do the work ... pass lsqnonlin the objective function, the
    % starting values, the bounds, and the options
    [mat_coeff_vector_result, res_error] = lsqnonlin(anon_fun, mat_coeff_starting_guess, [], [], options);
end

-----------------------------------------------------------------------------


Thanks,

Miaoqi



Sameer Agarwal

unread,
Feb 5, 2017, 1:27:42 AM2/5/17
to ceres-...@googlegroups.com
Miaoqi,

Assuming that you have in and out as two Eigen matrices that you want to match in the manner you described above, then the following bit of code can be used to construct the cost function that you need.

Notice that this allows you to solve problems with an arbitrary number of data points. It uses a variant of the AutoDiffCostFunction constructor which allows you to specify the number of residuals at construction time rather than as a template parameter.


struct MiaoqiFunctor {
  MiaoqiFunctor(const Eigen::MatrixXd& in, const Eigen::MatrixXd& out)
      : in(in), out(out) {}

  template <typename T>
  bool operator()(const T* m, T* residuals) const {
    const T b[9] = {m[0], m[1], 1.0 - m[0] - m[1],
                    m[2], m[3], 1.0 - m[2] - m[3],
                    m[4], m[5], 1.0 - m[4] - m[5]};
    for (int r = 0; r < in.rows(); ++r) {
      for (int c = 0; c < 3; ++c) {
        residuals[r * 3 + c] =
            out(r, c) - (b[3 * c + 0] * in(r, 0) +
                         b[3 * c + 1] * in(r, 1) +
                         b[3 * c + 2] * in(r, 2));
      }
    }
    return true;
  }

  const Eigen::MatrixXd& in;
  const Eigen::MatrixXd& out;
};

ceres::CostFunction* cost_function =
    new ceres::AutoDiffCostFunction<MiaoqiFunctor, ceres::DYNAMIC, 6>(
        new MiaoqiFunctor(in, out), out.rows() * 3);




Miaoqi Zhu

unread,
Feb 6, 2017, 10:18:16 PM2/6/17
to ceres-...@googlegroups.com
Hi Sameer,

Thanks so much for your help!

I tried your approach and the resulting matrix matched really well in the simple test. However, a dilemma arises when I try to adapt it to the more complicated case.

If you trace back to the first question, I was asking about having other routines that work on plain scalars, and you suggested:

"If (1) does not return derivatives, then you should just use numeric differentiation instead of automatic differentiation and all of these problems go away."

Now I need to switch back to automatic differentiation; however, the old errors about the Jet structure come back.

I will see if I can place the (computationally involved) routines somewhere outside the cost function so I can bypass this issue. In the meanwhile, if there is anything I can do to mitigate the type-mismatch error with automatic differentiation, any suggestions will be appreciated!

Have a great day!

Miaoqi


On Sat, Feb 4, 2017 at 10:27 PM, 'Sameer Agarwal' via Ceres Solver <ceres-...@googlegroups.com> wrote:
Miaoqi,

Miaoqi Zhu

unread,
Feb 6, 2017, 10:46:21 PM2/6/17
to ceres-...@googlegroups.com
I tried:

CostFunction* cost_function =
    new NumericDiffCostFunction<Objfun, CENTRAL, DYNAMIC, 6>(
        new Objfun(Mt_IN, Mt_OUT), TAKE_OWNERSHIP, 570);

The result still does not match. I then changed "CENTRAL" to "FORWARD" and "RIDDERS"; neither improved things much.


The thing is that converting "Mt_IN" to something else involves two other quite complex routines. It is hard to place them outside the cost function, as B (the matrix being improved) is a parameter of one of the routines.


Thanks,


Miaoqi

Sameer Agarwal

unread,
Feb 7, 2017, 12:01:38 AM2/7/17
to ceres-...@googlegroups.com
can you show me the structure of Objfun?

--
You received this message because you are subscribed to the Google Groups "Ceres Solver" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ceres-solver...@googlegroups.com.

Miaoqi Zhu

unread,
Feb 7, 2017, 4:27:27 PM2/7/17
to ceres-...@googlegroups.com
Hi Sameer,

I think I figured out a way to solve the issue when using automatic differentiation. After converting the types to T/"jet", the code compiles and runs. However, the result is still similar to what we had before with numeric differentiation.

I created a small testing case: https://github.com/miaoqi/test_regression

The main class of "ObjfunTest" is in "lib/rta.h". If you want to run the code on Mac, you can just do the regular thing:

1). mkdir build && cd build;
2). cmake ..
3). make

In "build", you can type "./regression" to run the test. You may also see how different the resulting matrix is from the lines at the end of "rta.h" .

Thanks so much for your help every time!

Miaoqi

Miaoqi Zhu

unread,
Feb 8, 2017, 5:42:24 PM2/8/17
to ceres-...@googlegroups.com
Hi Sameer,

Just wondering, did you get a chance to look at the structure of "ObjfunTest" yet?

The outcome matches well when B is applied directly in the cost function, just as you suggested. On the other hand, if B is applied by calling an external/intermediate routine, the mismatch begins from the first iteration.

In the end, the number of iterations is different: for the same input and option constraints, Ceres takes 16 iterations and Matlab takes 38.

Perhaps I have missed something. Any advice is greatly appreciated!

Thanks a lot,

Miaoqi

Sameer Agarwal

unread,
Feb 8, 2017, 9:18:26 PM2/8/17
to ceres-...@googlegroups.com

Miaoqi,
I have been traveling so I have not had a chance to look at this. I hope to look at it tomorrow.
Sameer



Sameer Agarwal

unread,
Feb 10, 2017, 2:03:45 AM2/10/17
to ceres-...@googlegroups.com
miaoqi,
I took a quick look at your code. Two things stood out. 

1. It is not clear to me why the number 190 is hardcoded in your code. Your code would be considerably simpler if one instance of the cost function just computed the residual for one rgb point, and you then created 190 (or however many) cost functions, one per rgb point, and added them to the problem.

2. If you have a single huge residual block, the robust loss function you are using becomes useless, because the objective function is monotonic in the one cost function, and robustifying it with a loss function does not really matter. I think what you want is a CauchyLoss applied to each residual block.

Also, I would start by not using any robust loss function at all and see what kind of results you get first.

So start by simplifying your code to operate on one data point at a time.

Sameer Agarwal

unread,
Feb 10, 2017, 3:22:38 AM2/10/17
to ceres-...@googlegroups.com
also, your code is incredibly complicated (and error prone) where you try to recreate basic matrix operations yourself, when you could easily use Eigen matrices. Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic> works with Jet types.
Sameer

Miaoqi Zhu

unread,
Feb 10, 2017, 9:23:19 PM2/10/17
to ceres-...@googlegroups.com
Hi Sameer,

Thanks for the feedback! I appreciate it!

I found out the cause in the math library - a line of code. Now the results match well.

The reason I wrote my own matrix operations is that some sites may have old, possibly customized Linux systems. Therefore, we may refer to another library for the regression in the meanwhile. If you are aware of any good libraries, please let me know. I have started something with GSL.

I will definitely consider using Eigen::Matrix when the prototype version is done. :)

Have a great weekend!

Miaoqi



Sameer Agarwal

unread,
Feb 10, 2017, 9:51:24 PM2/10/17
to ceres-...@googlegroups.com

Great. If you are going to use Ceres, you will be depending on Eigen anyway. It is a well-engineered, performant, and well-supported library.

I strongly recommend using it. Plus, it's header-only, so no separate compilation is needed.

Sameer

