Hello all,
I am developing a monocular VIO system, and I am trying to figure out the best way to handle periods where the camera undergoes pure rotation. I have been using GenericProjectionFactor when I can triangulate 3D points, but I would like to keep optimizing camera poses when the camera rotates away from those triangulated points. In the pure-rotation case, essential-matrix estimation is degenerate (the translation direction is unobservable), and I don't see a way within GTSAM to add a factor representing features tracked across a pure camera rotation, so I was wondering how other folks have handled this. I was thinking about two possibilities:
1. Write a custom factor whose residual warps a matched feature from one image into the other using the homography induced by the relative rotation alone, and switch to this factor when a pure-rotation situation is detected.
2. Use GenericProjectionFactor with landmarks initialized at an arbitrary fixed depth for features tracked during the rotation, and pin those landmarks with NonlinearEquality factors. Then be careful not to reuse these landmarks once the camera translates.
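To make option 1 concrete, here is a rough numpy sketch (not GTSAM code; the function names are mine) of the residual such a factor would evaluate, using the standard pure-rotation homography H = K * R_c2_c1 * K^-1:

```python
import numpy as np

def rotation_homography(K, R_w_c1, R_w_c2):
    """Homography mapping pixels from image 1 to image 2 under pure rotation.

    R_w_c1, R_w_c2 are camera-to-world rotations; H = K * R_c2_c1 * K^-1,
    where R_c2_c1 = R_w_c2^T * R_w_c1 is the camera-1-to-camera-2 rotation.
    """
    R_c2_c1 = R_w_c2.T @ R_w_c1
    return K @ R_c2_c1 @ np.linalg.inv(K)

def rotation_warp_residual(K, R_w_c1, R_w_c2, u1, u2_observed):
    """2D residual: observed match in image 2 minus the rotation-warped pixel."""
    H = rotation_homography(K, R_w_c1, R_w_c2)
    p = H @ np.array([u1[0], u1[1], 1.0])
    return np.asarray(u2_observed) - p[:2] / p[2]
```

A custom factor would evaluate this residual with Jacobians with respect to the two rotation (or pose) variables; note that it only constrains relative orientation, so the rest of the graph still has to fix the gauge.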
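And a quick numeric illustration of the caveat in option 2 (plain numpy, made-up values): under pure rotation a landmark at a fake depth reprojects exactly like the true one, but as soon as the camera translates the wrong depth shows up as parallax error:

```python
import numpy as np

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])

def project(X_c):
    """Pinhole projection of a point expressed in the camera frame."""
    u = K @ X_c
    return u[:2] / u[2]

# Bearing of a tracked feature from camera 1 (at the origin, identity rotation);
# the true landmark sits at depth 5, the "fake" landmark at an assumed depth 1.
b = np.array([0.1, -0.05, 1.0])
X_true, X_fake = 5.0 * b, 1.0 * b

a = 0.1  # camera 2 = camera 1 rotated about the y-axis
R2 = np.array([[np.cos(a), 0., np.sin(a)],
               [0., 1., 0.],
               [-np.sin(a), 0., np.cos(a)]])

# Pure rotation: projection depends only on the ray direction, so the
# fake-depth landmark is indistinguishable from the true one.
err_rot = project(R2.T @ X_true) - project(R2.T @ X_fake)

# Translate camera 2 and the depth error becomes visible as parallax.
t = np.array([0.3, 0.0, 0.0])
err_trans = project(R2.T @ (X_true - t)) - project(R2.T @ (X_fake - t))
```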
Is there another option that I am missing? Any input is appreciated.
Thanks,
Ben