Roger Labbe
Jul 26, 2024, 2:37:24 PM
to Ceres Solver
I am solving for up to 12 camera parameters (x, y, z, k1, k2, etc.). The values differ by many orders of magnitude (k2 can be 1e-16, x can be 1e4), so scaling everything to roughly the same range makes sense.
However, unscaling the values Ceres passes into every cost function means recomputing the same thing hundreds to thousands of times (a small problem by your standards), so I instead capture the parameter values once using an evaluation callback. This works fine for doubles, but is painful for jet numbers, which are templated and end up pushing the templates throughout the code base. So I use void* to type-erase the jet numbers until the point of use:
class ScaledCallbackBase : public ceres::EvaluationCallback {
public:
  virtual ~ScaledCallbackBase() = default;
  virtual void PrepareForEvaluation(bool evaluate_jacobians, bool new_evaluation_point) = 0;
  virtual const double* get_camera_unscaled() const = 0;
  virtual const void* get_jet_camera_unscaled() const = 0; // Use void* for type erasure
};
template <int N>
class ScaledCallback : public ScaledCallbackBase {
public:
  ScaledCallback(double* camera_params, ParameterScaler* scaler)
    : scaler_(scaler)
  {
    camera_scaled_ = camera_params;
  }
  ~ScaledCallback() = default;
  void PrepareForEvaluation(bool /*evaluate_jacobians*/, bool new_evaluation_point) override
  {
    if (new_evaluation_point)
    {
      scaler_->UnscaleCameraParams(camera_unscaled_, camera_scaled_);
      // Seed jet i at derivative index i. The jets are N + 3 wide because
      // the residual also depends on the 3-vector world point block.
      for (int i = 0; i < N; ++i)
      {
        jet_camera_unscaled_[i] = ceres::Jet<double, N + 3>(camera_unscaled_[i], i);
      }
    }
  }
  const double* get_camera_unscaled() const override
  {
    return camera_unscaled_;
  }
  const void* get_jet_camera_unscaled() const override
  {
    return static_cast<const void*>(jet_camera_unscaled_);
  }
private:
  ParameterScaler* scaler_;
  double* camera_scaled_; // scaled to roughly [-1, 1], comes from ceres
  double camera_unscaled_[N];
  ceres::Jet<double, N + 3> jet_camera_unscaled_[N];
};
And then the cost function casts the void pointer back to a pointer to jet numbers:
template <typename T>
struct DeduceJetDimension;
template <typename T, int N>
struct DeduceJetDimension<ceres::Jet<T, N>> {
  static constexpr int value = N;
};
template <typename T>
bool PointError::operator()(const T* const camera,
              const T* world_pt,
              T* residuals) const
{
  (void)camera; // unused: the unscaled camera values come from the callback instead
  T predicted[2];
  if constexpr (std::is_same<T, double>::value)
    EulerWorldToScreen(world_pt, scaled_callback->get_camera_unscaled(), predicted);
  else
  {
    constexpr int N = DeduceJetDimension<T>::value;
    auto jet_camera_unscaled = static_cast<const ceres::Jet<double, N>*>(scaled_callback->get_jet_camera_unscaled());
    EulerWorldToScreen(world_pt, jet_camera_unscaled, predicted);
  }
  // the error is the difference between the predicted and observed
  residuals[0] = (predicted[0] - T(observed_x_)) * std_dev_reciprocal_;
  residuals[1] = (predicted[1] - T(observed_y_)) * std_dev_reciprocal_;
  return true;
}
So, a few questions: am I roughly doing this right? Is there a concise (and inexpensive) way to convert to jet numbers inside the call, or a better way to do it in the evaluation callback? Or something better I haven't considered?
Second, in my problem I observe that the number of parameters is 12, but jet::N == 15. I don't understand why it is 3 larger; more importantly, is this always true? If the number of parameters is 2, would N == 5?
Third, this scaling work has been on our backlog for a while, and we finally made time to pursue it because a team was seeing extremely small values in the covariance matrix, around 1e-17 to 1e-37. They were solving for k1, which is a very small number (~-1e-8). With this scaled solution, the covariance matrix values seem to more closely match the amount of error in the measurements, which is also true if we hold k1/k2 constant (the covariance looks 'reasonable'). To be clear: all the covariance numbers are at that order of magnitude, not just the variance of k1 or the covariance of (k1, something). Some insight into why this is happening would be useful, and also how to interpret the covariance matrix in the presence of scaling (I'd guess it also needs to be unscaled?). I'm not a mathematician, just a user, so if you elect to address this please be gentle!