Calibration, Matching and Bundle Adjustment


Shaun Berryman

Aug 3, 2018, 6:15:37 PM8/3/18
to COLMAP
First, thanks for releasing COLMAP and continuing to update it! Sorry for asking multiple questions in the same thread and the verbosity.

I'm using a DJI Phantom 4 and have run many experiments using video at different sample rates (15fps, 5fps, 3fps, 2fps, 1fps) and software such as Altizure and DroneDeploy for path planning. For the video paths I used Litchi in waypoint mode.

Calibration and bundle adjustment:

I'm really struggling to get a good camera calibration. I've had the best success with the RADIAL camera model, but I usually hit a wall in bundle adjustment once I approach 900-1000 registered images. It will run with 10-14 global iterations and a single local iteration, but suddenly jumps to 50 global and 200 local iterations (the maximums) after crossing that magic threshold. With the OPENCV camera model it never hits many local iterations, but it always produces a convex or concave warped sparse model; that is easy to spot because the area I'm testing is very flat. With FULL_OPENCV I can't get the model to initialize at all.

Radial parameters:


Matching:

For the video-based testing I used sequential matching, which produces excellent results, but I can't manage to get it to perform loop closure. That is more likely due to my flight plan than anything else, but still frustrating, as I can get ORB_SLAM2 to detect the loop closures. I've had to resort to pulling flight logs from the bird, writing the latitude, longitude, and altitude into each frame's EXIF tags, and using spatial matching. This works, but there is more noise in the sparse model than there was with sequential matching. I have been using default settings for everything (except min_matches, which I've pushed up to 80 from the default of 15 based on spot checks using your feature matching GUI).
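For what it's worth, the log-to-EXIF step mostly comes down to converting the flight log's decimal-degree coordinates into the rational degrees/minutes/seconds triples that the EXIF GPS IFD expects. A minimal pure-Python sketch of just that conversion (the function name is my own; an actual tag writer such as piexif or exiftool would do the file I/O):

```python
def deg_to_dms_rationals(value):
    """Convert a decimal-degree coordinate to EXIF-style rational DMS:
    ((degrees, 1), (minutes, 1), (seconds * 100, 100)).
    Sign (N/S or E/W) goes into the separate GPS ref tag, so abs() here."""
    value = abs(value)
    deg = int(value)
    rem = (value - deg) * 60          # fractional degrees -> minutes
    minutes = int(rem)
    sec_hundredths = round((rem - minutes) * 60 * 100)
    return ((deg, 1), (minutes, 1), (sec_hundredths, 100))
```

The hemisphere reference ('N'/'S', 'E'/'W') and the altitude tag are written separately, but this covers the fiddly part of the conversion.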

Video flight plan:

The flight plan is double-helix shaped, which means that with sequential matching the 1st loop closure won't happen until at least 50% of the way through the flight. The start and end points are at the top of the NADIR image, and the first loop closure would be at the bottom intersection of the NADIR image. I would expect at least 12 closures in that path. Without the closures the camera positions diverge and I get ghosting-like effects. By the way, the model below was 8,525 images with ~2.27M points. I'm currently running the stereo process but don't expect it to finish for several more days.

NADIR:



Perspective:

Video: https://www.dropbox.com/s/mfendo581rq8p80/out.mp4?dl=0


Dense sample:

This was a small test using sequential matching at 15fps and only half of the first loop. This is the dense point cloud; I never generated a mesh (~77M points).

If you look closely at the pool in the bottom-middle portion of the 2nd image, you can see it even triangulated the underwater entry/exit steps!

Shaun Berryman

Aug 3, 2018, 6:24:57 PM8/3/18
to COLMAP
Looks like I forgot to paste the camera parameters:

Radial:
2312.509857, 1925.164097, 1058.106395, -0.021737, 0.001116

OPENCV:
2371.326493, 2363.007085, 1924.704145, 1054.648013, -0.023632, 0.002066, 0.006711, 0.005696

* The OPENCV parameters above resulted in a convex-ish shape. See image below. Obviously this was done using a different set of images. This was 7 different flights with a video sample rate of 2fps and vocab matching. 
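For reference, COLMAP's RADIAL parameter order is f, cx, cy, k1, k2, with the radial polynomial applied in normalized image coordinates. A small sketch of projecting a camera-frame 3D point under that model (my own helper, not COLMAP code), which can help sanity-check whether estimated k1/k2 values bend a flat scene noticeably:

```python
def project_radial(X, Y, Z, f, cx, cy, k1, k2):
    """Project a camera-frame point (X, Y, Z) with the RADIAL model:
    distortion factor 1 + k1*r^2 + k2*r^4 on normalized coordinates."""
    x, y = X / Z, Y / Z                  # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2    # radial distortion factor
    return (f * d * x + cx, f * d * y + cy)
```

With k1 = k2 = 0 this reduces to a plain pinhole projection, so a quick check is to compare projections with and without the estimated distortion coefficients at the image corners.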


Shaun Berryman

Aug 6, 2018, 3:33:16 PM8/6/18
to COLMAP
Dense has finished and produced a point cloud with 412,752,080 points (11.1 GB on disk), but unfortunately I don't have a computer with enough RAM to open it 😂

I've tried using COLMAP -> Import From, which crashes after about 30 GB of swap usage. MeshLab gets up to about 15% and then enters a crash/reload loop. The most I have in a single box is 64 GB.

Console shows this on crash:

*** Aborted at 1533583505 (unix time) try "date -d @1533583505" if you are using GNU date ***
PC: @         0x40d2be85 (/tmp/.gl7My3Cv (deleted)+0x1e84)
*** SIGSEGV (@0x0) received by PID 105667 (TID 0x7f71454e5800) from PID 0; stack trace: ***
    @     0x7f7140f854b0 (unknown)
    @         0x40d2be85 (/tmp/.gl7My3Cv (deleted)+0x1e84)
Segmentation fault (core dumped)


Johannes Schönberger

Aug 7, 2018, 2:28:06 AM8/7/18
to col...@googlegroups.com
You could try and downsample the point cloud using PCL or some other library?
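Lacking PCL, a quick-and-dirty voxel-grid downsample can be done in pure Python (streaming the points so the full cloud never has to fit in RAM at once). This sketch keeps the centroid of each occupied voxel; it is my own code, not a PCL call:

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Collapse points into one centroid per occupied voxel.
    points: iterable of (x, y, z) tuples; returns a list of centroids."""
    bins = defaultdict(lambda: [0.0, 0.0, 0.0, 0])  # sums + count per voxel
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        acc = bins[key]
        acc[0] += x; acc[1] += y; acc[2] += z; acc[3] += 1
    return [(sx / n, sy / n, sz / n) for sx, sy, sz, n in bins.values()]
```

Because the input is consumed one point at a time, this can be fed from a PLY reader generator; memory then scales with the number of occupied voxels rather than the 412M input points.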

Shaun Berryman

Aug 7, 2018, 12:25:23 PM8/7/18
to COLMAP
I was able to use Entwine and Potree to at least get a glimpse of the dense point cloud. No idea what happened between sparse and dense, but the dense looks awful.

Sparse:



Dense:

 

Pierre-Olivier Agliati

Feb 18, 2020, 3:41:44 AM2/18/20
to COLMAP
Hi Shaun,

I guess one never hopes to be answered 18 months later, but as I've just faced the same problem as you
regarding the jump of the bundle adjuster from a dozen iterations to 50 (in fact the maximum number of iterations allowed
without convergence), I'm sharing with everyone what I found while looking at the code.
There are two empirical choices in bundle_adjustment.cc regarding the kind of linear solver to be used
by Ceres, and 1000 images is one limit beyond which SPARSE_SCHUR is set aside in favor of ITERATIVE_SCHUR.
These strategies are recommended in the Ceres FAQ: http://ceres-solver.org/solving_faqs.html
although no precise threshold is given for switching between them.
I've personally changed the limit to 4000, as I've found SPARSE_SCHUR to perform better, and because I don't intend to
build reconstructions beyond that number of images at the moment.
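For anyone else hitting this, the switch described above can be paraphrased as a simple threshold on the number of images (the function and parameter names here are my own; the real logic lives in COLMAP's bundle_adjustment.cc):

```python
def choose_linear_solver(num_images, sparse_schur_limit=1000):
    """Paraphrase of the empirical solver choice in COLMAP's bundle adjuster:
    exact sparse Schur factorization below the limit, iterative Schur above.
    Raising sparse_schur_limit (e.g. to 4000) keeps SPARSE_SCHUR for longer."""
    if num_images <= sparse_schur_limit:
        return "SPARSE_SCHUR"      # exact factorization, converges in few iterations
    return "ITERATIVE_SCHUR"       # preconditioned CG, cheaper per image but may
                                   # run to the iteration cap without converging
```

This matches the symptom in the first post: crossing ~1000 registered images flips the solver, and the iterative solver then runs to its 50/200 iteration limits.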

Cheers,
Pierre-Olivier