Hi everyone, I have stable code I've been using for a while that runs visual SLAM. I'm solving large problems with thousands of frames incrementally, using my own implementation rather than ISAM2, so my code runs several hundred batch optimizations with the LM optimizer over the course of a run.
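Roughly, the optimization pattern is the one below (a minimal sketch of the incremental batch-LM setup, not my actual code; the per-frame graph construction is omitted):

```python
import gtsam

def run_batch_lm(graph: gtsam.NonlinearFactorGraph, initial: gtsam.Values) -> gtsam.Values:
    # One full batch Levenberg-Marquardt solve over the current graph;
    # this gets called several hundred times as new frames are added.
    params = gtsam.LevenbergMarquardtParams()
    optimizer = gtsam.LevenbergMarquardtOptimizer(graph, initial, params)
    return optimizer.optimize()
```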
A typical run takes around 40 minutes and uses <8 GB of RAM. Up until now I have been using a constant noise model for all projection factors, but since I know my camera's rectification model and parameters, I decided to compute a covariance per point that takes the rectification into account, to get a more accurate solution.
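The change amounts to roughly the following (a simplified sketch; `point_covariance_2x2` is a hypothetical stand-in for my rectification-based covariance computation, not a GTSAM function, and the intrinsics are just example values):

```python
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, L

K = gtsam.Cal3_S2(500.0, 500.0, 0.0, 320.0, 240.0)  # example intrinsics

def point_covariance_2x2(u: float, v: float) -> np.ndarray:
    # Placeholder for the per-point covariance derived from the rectification
    # model and parameters; returns a 2x2 matrix in pixel^2.
    return np.diag([1.0, 1.0])

def add_projection_factor(graph: gtsam.NonlinearFactorGraph,
                          u: float, v: float, pose_idx: int, lm_idx: int):
    # Before: one shared noise model for every projection factor
    # noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)
    # After: a per-measurement Gaussian noise model built from the 2x2 covariance
    noise = gtsam.noiseModel.Gaussian.Covariance(point_covariance_2x2(u, v))
    graph.add(gtsam.GenericProjectionFactorCal3_S2(
        gtsam.Point2(u, v), noise, X(pose_idx), L(lm_idx), K))
```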
Ever since I made this change, the process runs much slower and memory spikes up to 64 GB, at which point it hits my machine's limit and the process is killed. This only happens in debug mode; if I just run the Python script normally, everything works perfectly fine, running fast and never using more than 8 GB of RAM.
Has anyone encountered this behavior before? I would like to point out that, as expected, this change improved the accuracy of the SLAM results when compared against GT data, so it's something I want to keep, but it makes development nearly impossible without being able to debug.