I'm writing a test problem in Ceres of the form y = ax^b + c. If y = ax + c is the optimal linear-regression solution, then by initializing b to 1 I can check whether some exponent produces a better fit than a straight line. Vectorized it looks like y = a1*x1^b + a2*x2^b + ... + an*xn^b + c.
When b = 1 this is exactly the linear-regression solution. The problem is that as soon as I use pow(x, b) with b as a parameter, I get a nan Jacobian, even if I force all the b_i to be equal.
Parameter Block 1, size: 1
1 | -nan
The cost function is simple enough to show here; a, b, and c are the parameter blocks. If I set b constant, the problem converges immediately, as expected, since the least-squares solution is then globally optimal.
T costfunc = T(0.0);
for (int i = 0; i < w.cols(); i++) {
    VectorXd ith_person = w.col(i);
    T p = T(0.0);
    // y = a1*x1^b + a2*x2^b + ...
    for (int j = 0; j < ith_person.size(); j++) {
        p += ceres::pow(ith_person[j], b[0]) * a[j];
    }
    // accumulate the squared error (y_i - (p + c))^2
    costfunc += ceres::pow(ys[i] - (p + c[0]), 2);
}
residual[0] = costfunc;
return true;
Is optimizing over an exponent parameter simply not tractable in Ceres, or is there a way to make this work? Thanks.