ESmooth(T_Vec3 _xi, T_Vec3 _xj, const Weight& weight)
template <typename U>
bool operator()(const U* Ai, const U* bi, const U* bj, U* residual) const;
Ceres Solver Google Group
http://groups.google.com/group/ceres-solver?hl=en
where a_i1, a_i2, a_i3 are the column vectors of A_i and v is the set of vertices/nodes,
and
where x_i are node positions and c_i are correspondence points on a surface with normal n_i that I'm trying to align with. alpha_point and alpha_plane are known constants.
The total energy function is E_tot = alpha_fit*E_fit + alpha_reg*(E_rigid + 0.1*E_smooth), and it is not linear. I am also doing relaxation: starting with alpha_fit = 0.1 and alpha_reg = 1000, whenever the relative change of the cost drops below some threshold I divide alpha_reg by 10, and I continue until alpha_reg falls below 0.1. I have taken this scheme from the papers I posted.
Regarding using a CostFunction: honestly, I haven't spent any time trying to differentiate these terms, since I saw there was autodiff functionality. But if that will speed up the solver, I will most definitely take some time and implement a CostFunction instead. If I work out the differentiation, I could presumably do some nice optimization by computing both the residuals and the Jacobian together, since I'd have full control.
Best, Christopher
This is an easy enough term, or rather terms. You should ideally add six new terms per vertex here, which you should implement via autodiff or analytic differentiation yourself. One thing to be careful of is that the last three terms involve square roots of the squared norms of a_i1 etc. If these get close to zero, the autodiff will have problems. Also, it looks like you are trying to make A into a rotation matrix. Why not actually parameterize your transformation using a rotation matrix itself?
You may want to look at ScaledLoss and LossFunctionWrapper in loss_function.h to help you with annealing the scalars around each of these terms so that you do not have to rebuild the entire problem from scratch every time you change these values.
Generally speaking you should use autodiff for terms, but in some cases, like your edge term, it may be better to use analytic differentiation. Before you do all that, though, it is worth figuring out whether your optimization is spending time in Jacobian evaluation at all. Summary::FullReport will tell you that. You will need to be on the master branch for this, since the time logging support was added just a week or so ago.
> Also it looks like you are trying to make A into a rotation matrix. Why not actually parameterize your transformation using a rotation matrix itself?

Yes, that is correct: A should be close to a rotation matrix. I'm unsure about the reasons for not modeling A_i as a rotation matrix, or perhaps using a quaternion and removing this term completely. Maybe they want A_i to have that freedom of being "close" to a rotation matrix? I haven't read any paper yet that raises this question. Then again, it wouldn't be that hard to try out, and I will probably do this at a later stage.

> You should ideally add six new terms per vertex here.

I have implemented this with one autodiff residual block per vertex that outputs 6 residuals. Is there any difference between this approach and using 6 residual blocks per vertex for this term?
> You may want to look at ScaledLoss and LossFunctionWrapper in loss_function.h [...] so that you do not have to rebuild the entire problem from scratch every time you change these values.

I have actually cheated here by passing a reference to a global weight that is evaluated in each functor, so I can change it from outside during the optimization (I'm using a callback in which I do the relaxation based on the cost change). However, a loss function seems better, and I will probably go that way.
Everything works now and I am getting the result I want. However, yesterday I noticed that my problem always begins with 8-10 "unsuccessful" iterations before it actually does anything. Is there a way to track down the cause of this?
//Christopher
> I have implemented this with one autodiff residual block per vertex that outputs 6 residuals. Is there any difference between this approach and using 6 residual blocks per vertex for this term?

Since each matrix A_i is a single parameter block, it is better to have a single residual block that computes 6 residuals. This will be more efficient than adding six terms with one residual each. Basically, if your residual block is hiding sparsity between parameter blocks, it should be split into separate residual blocks. E.g., suppose you have the following expression:

(x1 + x2 + x3 + x4 - y)^2 + x1^2 + x2^2 + x3^2 + x4^2

where x1, x2, x3 and x4 are different parameter blocks. One way to implement this would be to create one residual block with five terms:

x1 + x2 + x3 + x4 - y
x1
x2
x3
x4

But notice that the bottom four rows of the residual block are actually very sparse. If you make them part of the same residual block, Ceres can't see the sparsity; it treats each row as dense in the parameter blocks it depends on. In this case, it is better to have five different terms: it will save memory as well as expose more sparsity to the linear solver.
> I have actually cheated here by passing a reference to a global weight that is evaluated in each functor, so I can change it from outside during the optimization.

Having a global scalar that you pass in is fine. Changing the weight during the optimization is a very bad idea: it changes the objective function mid-optimization and breaks Ceres' reasoning about what constitutes progress. Please don't do this. During a Solve call, the mathematical form of your objective function should remain constant.
> Everything works now and I am getting the result I want. However, my problem always begins with 8-10 "unsuccessful" iterations before it actually does anything. Is there a way to track down the cause of this?

This means that your initial trust region size is too large. Try reducing Solver::Options::initial_trust_region_radius. Are you changing the default value when you start the optimization?
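A minimal fragment for that setting, assuming `<ceres/ceres.h>` is available; the value is illustrative, something to tune rather than a recommendation:

```cpp
#include <ceres/ceres.h>

ceres::Solver::Options options;
// The Ceres default is 1e4; a smaller starting radius keeps the first
// steps small enough to be accepted on poorly scaled problems.
options.initial_trust_region_radius = 1e2;  // illustrative value
```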
> During a Solve call, the mathematical form of your objective function should remain constant.

Ok! I guess I have to terminate the solver, change the weight, and then call Solve again? Does this completely reset the solver, or do I actually have to create a new Problem object and pass it to Solve?
> Are you changing the default value when you start the optimization?

Nope, it's on the default! But I'll try lowering it.