I am solving quite large sparse optimization problems, which is generally working very well for me. Unfortunately, sometimes the problems are so large that the number of nonzeros in block_sparse_matrix.cc overflows "int" (32 bit), and I receive CHECK failures like this:
Check failed: num_nonzeros >= 0 (-something vs. 0)
For the time being, the CHECK macro is an issue for me, because it simply aborts the program - and I do not really have a way to predict at what point an optimization problem becomes "too large". Can you think of an elegant way to either handle this overflow more gracefully, without aborting and without patching ceres/glog, or a way to detect this case before actually triggering the check failure?
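To illustrate the kind of pre-flight check I have in mind (this is just a sketch, not Ceres API - the block sizes here would have to come from my own bookkeeping of residual and parameter block dimensions before calling the solver): sum the per-block nonzero counts in 64-bit arithmetic and refuse to solve if the total would overflow a 32-bit int.

```cpp
#include <cstdint>
#include <limits>
#include <utility>
#include <vector>

// Hypothetical pre-flight check: each (rows, cols) pair stands for one dense
// Jacobian block; it contributes rows*cols nonzeros. Accumulate in int64_t so
// the sum itself cannot overflow, and compare against INT32_MAX, which is the
// limit the int-typed num_nonzeros in block_sparse_matrix.cc effectively imposes.
bool JacobianFitsInInt32(const std::vector<std::pair<int64_t, int64_t>>& blocks) {
  const int64_t limit = std::numeric_limits<int32_t>::max();
  int64_t num_nonzeros = 0;
  for (const auto& block : blocks) {
    num_nonzeros += block.first * block.second;
    if (num_nonzeros > limit) {
      return false;  // this problem would trip the CHECK; bail out gracefully
    }
  }
  return true;
}
```

If this returns false I could split the problem, subsample residuals, or report a clean error to the caller instead of having glog abort the whole process.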
I have already tried using glog's google::InstallFailureFunction to install a custom failure handler that throws instead of aborting... But throwing does not work there, because the handler is invoked from inside a destructor - destructors are noexcept by default since C++11, so an exception escaping one calls std::terminate and the program dies anyway. (And to be honest, I am not sure whether throwing instead of aborting would actually be a good idea in general...)