Model fitting in ceres_solver


Huy Nguyen

Dec 20, 2016, 11:42:14 PM
to Ceres Solver


I'm using ceres-solver to optimize a set of 9 parameters for my Monte-Carlo (MC) simulation. For every parameter set, the MC code returns an armadillo double matrix of the form results = { {1,2,3,...,19} } (one row, 19 columns), and I have a corresponding set of measured data. How do I optimize the parameters with ceres-solver?

Here's the code so far:


#include "ceres/ceres.h"
#include "glog/logging.h"
#include <armadillo>

using namespace arma;  // mat, wall_clock

using ceres::AutoDiffCostFunction;
using ceres::CostFunction;
using ceres::CauchyLoss;
using ceres::Problem;
using ceres::Solver;
using ceres::Solve;

struct SimulationResidual {
    SimulationResidual()
        : y_{346.301, 346.312, 346.432, 346.394, 346.471, 346.605, 346.797,
             346.948, 347.121, 347.384, 347.626, 348.08, 348.561, 349.333,
             350.404, 351.761, 352.975, 354.17, 354.809} {}

    template <typename T>
    bool operator()(const T* const T_params, T* residual) const {
        // Copy the parameters into the globals read by the MC simulation.
        Tm   = T_params[0];
        g31  = T_params[1];
        g32  = T_params[2];
        gg31 = T_params[3];
        gg32 = T_params[4];
        bn   = T_params[5];
        bu   = T_params[6];
        mn   = T_params[7];
        mu   = T_params[8];
        mat Tjumpave = Tjump_ave();  // run the MC simulation
        // One residual per data point.
        for (int i = 0; i < 19; ++i) {
            residual[i] = Tjumpave(0, i) - T(y_[i]);
        }
        return true;
    }

private:
    const double y_[19];
};

int main(int argc, char** argv) {
    wall_clock timer;
    timer.tic();
    google::InitGoogleLogging(argv[0]);
    double T_params[] = { Tm, g31, g32, gg31, gg32, bn, bu, mn, mu };  // initial guess
    Problem problem;
    CostFunction* cost_function =
        new AutoDiffCostFunction<SimulationResidual, 19, 1, 1>(
            new SimulationResidual());
    problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), &T_params);
}

When I compile this in Xcode with -lceres and -lglog (eigen3 is installed and the package's own examples build and run fine), I get a "no matching member function for call to 'AddResidualBlock'" error, among others. Please let me know if you need more information to help you help me.

Thanks.


P.S. The error is on the problem.AddResidualBlock line.

Sameer Agarwal

Dec 21, 2016, 6:50:54 AM
to Ceres Solver
Huy,
Have you tried working through the tutorial? 

I do not quite understand your code. As far as I can tell this code should not compile. 
In any case, 

CostFunction* cost_function =
    new AutoDiffCostFunction<SimulationResidual, 19, 1, 1>
    ( new SimulationResidual() );
is wrong, since your functor has a 19-dimensional residual but takes a single 9-dimensional parameter block as input. It should read

CostFunction* cost_function =
    new AutoDiffCostFunction<SimulationResidual, 19, 9>
    ( new SimulationResidual() );
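
The AddResidualBlock call probably also needs adjusting: the parameter block is passed as a plain double*, so the array name (which decays to double*) rather than a pointer to the array. A minimal sketch, assuming T_params is the 9-element double array from your main():

    problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), T_params);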

Besides this, I expect that your Monte Carlo simulation is non-deterministic, which means that for the same parameters you will get different objective function values on every execution, unless you are using some sort of quasi-random numbers.

This is going to be a problem, since ceres depends on its objective function being deterministic.

Sameer






Huy Nguyen

Dec 22, 2016, 1:49:35 PM
to Ceres Solver
Hi Sameer,

Yes, I have worked through some of the examples from the website, namely curve_fitting and robust_curve_fitting. Thank you for pointing out the mistake in my code. My Monte-Carlo simulation is non-deterministic, so I don't have an analytical expression for the fit. I realize that for the solver to work, numerical differentiation may have to take place, but I'm not sure how the solver can get the derivatives with respect to the 9 parameters I'm putting in.

Is there any suggestion to have optimization work for a non-deterministic function?

Thanks,

Huy

Sameer Agarwal

Dec 22, 2016, 9:49:48 PM
to ceres-...@googlegroups.com
Huy,
If all you are doing is calling an external Monte-Carlo routine and comparing matrices, why not just use numeric differentiation to define the whole cost function?
In any case, the numeric derivatives (and residual values) are going to be noisy because of the Monte-Carlo simulation, and that is likely to cause problems for your optimization.
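
A minimal sketch of what that could look like, assuming the functor is rewritten to take plain doubles (Tjump_ave(), the y_ data, and T_params are taken from your earlier post):

    struct SimulationResidual {
        // Plain-double functor: Ceres estimates the Jacobian by finite differences.
        bool operator()(const double* const params, double* residual) const {
            // Hand the 9 parameters to the MC code here (e.g. set Tm, g31, ... from params).
            mat Tjumpave = Tjump_ave();
            for (int i = 0; i < 19; ++i) {
                residual[i] = Tjumpave(0, i) - y_[i];
            }
            return true;
        }
        double y_[19];  // measured data, initialized as in the original functor
    };

    CostFunction* cost_function =
        new ceres::NumericDiffCostFunction<SimulationResidual, ceres::CENTRAL, 19, 9>(
            new SimulationResidual());
    problem.AddResidualBlock(cost_function, new CauchyLoss(0.5), T_params);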

Sameer


Huy Nguyen

Dec 26, 2016, 2:10:54 PM
to Ceres Solver
Hi,

Sorry for the delayed reply, and thank you for the suggestion. I am writing the code to do the optimization with numeric differentiation. I think I'll average more simulation runs to reduce the noise; hopefully that will help. Something along these lines might work inside the functor, assuming Tjump_ave() can simply be called repeatedly (the helper name and the number of runs are made up):
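
    // Average several independent MC runs to reduce the noise in each residual.
    mat AveragedTjump(int num_runs) {
        mat accum = zeros<mat>(1, 19);
        for (int run = 0; run < num_runs; ++run) {
            accum += Tjump_ave();  // one noisy simulation run
        }
        return accum / num_runs;
    }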

Thank you,

Huy

Sameer Agarwal

Dec 27, 2016, 11:43:43 PM
to Ceres Solver
Huy,
I suggest using quasi-random numbers, or pseudo-random numbers with a fixed seed. It will bias your simulations, but it will bring the variance down to zero.
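
For example, if the simulation draws from a std::mt19937 (the generator name here is an assumption about your MC code), re-seeding it at the top of the numeric-diff functor makes repeated evaluations at the same parameters identical:

    #include <random>

    std::mt19937 mc_rng;  // assumed to be the generator driving Tjump_ave()

    // first lines of the functor's operator():
    mc_rng.seed(12345u);         // fixed seed: same parameters -> same residuals
    mat Tjumpave = Tjump_ave();  // now a repeatable function of the parameters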
Sameer

