Let me poll the field and see if anyone has a good suggestion. One of the cameras I would like to use for photogrammetry is a Bebop 2 drone, which mounts a 180-degree (but cropped) fisheye camera. As far as I can tell, COLMAP makes an excellent sparse point cloud using the "simple_radial_fisheye" camera model. However, the only exporter I can make sense of is for "*.nvm", which says it does not support the fisheye camera model.
So far, the best process I have found is to have COLMAP run through to a complete sparse reconstruction, then run the "undistort" tool with min_scale set to 0.8 to capture a fairly large amount of the data available from the camera. Next, I close and reopen COLMAP, run a new project using the undistorted images with the "simple_radial" camera model, and export as *.nvm. It seems a bit of a waste to throw away all of that data, but it does work and can give some pretty good results. At the moment I am using OpenMVS for dense reconstruction, triangulation, and texturing. I believe it can support fisheye lenses, but I have not found a way to test that yet, because *.nvm is the only format I have found that COLMAP will export and OpenMVS will import. I am open to other tools if needed.
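For reference, the two-pass workaround described above can be sketched from the command line. Flag names follow the current COLMAP CLI docs, all paths are placeholders, and this is untested on the Bebop 2 data:

```shell
# Pass 1: sparse reconstruction with the fisheye camera model
colmap feature_extractor \
    --database_path fisheye.db --image_path images \
    --ImageReader.camera_model SIMPLE_RADIAL_FISHEYE \
    --ImageReader.single_camera 1
colmap exhaustive_matcher --database_path fisheye.db
colmap mapper \
    --database_path fisheye.db --image_path images --output_path sparse

# Undistort, keeping more of the frame than the defaults allow
colmap image_undistorter \
    --image_path images --input_path sparse/0 \
    --output_path undistorted --min_scale 0.8

# Pass 2: rerun on the undistorted images with a non-fisheye model,
# then export NVM for OpenMVS
colmap feature_extractor \
    --database_path undist.db --image_path undistorted/images \
    --ImageReader.camera_model SIMPLE_RADIAL \
    --ImageReader.single_camera 1
colmap exhaustive_matcher --database_path undist.db
colmap mapper \
    --database_path undist.db --image_path undistorted/images \
    --output_path sparse2
colmap model_converter \
    --input_path sparse2/0 --output_path model.nvm --output_type NVM
```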
Hi,
What is the motivation to use OpenMVS? In my experience, the dense point cloud from COLMAP should be significantly better than what OpenMVS produces. The meshing and texture mapping are better in OpenMVS, so you can just use OpenMVS after the dense stereo stage in COLMAP.
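Assuming the CLI route, that hybrid pipeline would look roughly like this. OpenMVS's InterfaceCOLMAP importer reads a COLMAP dense workspace; the tool names and flags below are taken from the two projects' documentation and are untested here:

```shell
# Dense stereo and fusion in COLMAP (requires CUDA), run on the
# workspace produced by image_undistorter
colmap patch_match_stereo \
    --workspace_path dense --workspace_format COLMAP \
    --PatchMatchStereo.geom_consistency true
colmap stereo_fusion \
    --workspace_path dense --workspace_format COLMAP \
    --input_type geometric --output_path dense/fused.ply

# Hand off to OpenMVS for meshing and texture mapping
InterfaceCOLMAP -i dense -o scene.mvs
ReconstructMesh scene.mvs
TextureMesh scene_mesh.mvs
```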
Cheers,
Johannes
--
You received this message because you are subscribed to the Google Groups "COLMAP" group.
To unsubscribe from this group and stop receiving emails from it, send an email to colmap+un...@googlegroups.com.
To post to this group, send email to col...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/colmap/c399033e-8d5f-47fb-aef9-b11b76ca5bff%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
I will give it a try. Since the dense reconstruction is CUDA-only, I had not looked at it; I only have access to one NVIDIA card, and that's an old K2200. However, I won't be able to test that path until Monday.
To answer the motivation question: in general I try to avoid vendor lock-in, and CUDA is NVIDIA-only. The other motivation is the number of machines I have access to with 64 or 128 GB of RAM but no video card at all. Slow processing is not a problem for me, but the desired projects can be huge. I will let everyone know how it goes.
Hi,
Have you looked at https://colmap.github.io/faq.html#speedup-dense-reconstruction ?
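For anyone finding this thread later, the FAQ's suggestions largely come down to a few patch_match_stereo options. The values below are illustrative trade-offs, not recommendations, and the exact flag names should be checked against your COLMAP version:

```shell
# Downscale the inputs, shrink the matching window, take fewer
# samples, and skip the geometric-consistency second pass -- each
# trades reconstruction quality for speed and memory
colmap patch_match_stereo \
    --workspace_path dense --workspace_format COLMAP \
    --PatchMatchStereo.max_image_size 2000 \
    --PatchMatchStereo.window_radius 4 \
    --PatchMatchStereo.num_samples 10 \
    --PatchMatchStereo.cache_size 8 \
    --PatchMatchStereo.geom_consistency false
```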
Cheers,
Johannes
From: 'Peter Falkingham' via COLMAP <col...@googlegroups.com>
Reply-To: <col...@googlegroups.com>
Date: Friday, April 20, 2018 at 1:59 PM
To: COLMAP <col...@googlegroups.com>
Thank you both for your work. I will follow up on the results next week.
Glad it's been useful to you (but all credit goes to Johannes for making COLMAP).

Johannes: thanks. I've played with those settings before and forgot about them. I feel my attempts to speed up dense reconstruction generally reduced quality too much, but I'll give it another go.
To report back: using the dense reconstruction as-is did not work very well. The output mesh was missing walls and had poor coverage in several places. On inspection, this seems to be because the default settings of the undistortion step take the 3320x4096 fisheye image and create an 819x664 image of only the very center of the source image. That is only about 4% of the collected data. It is very difficult to keep the object of interest in the perfect center of the frame, and it was often only partially in view. Through the Extras > Undistortion option I have access to other settings, and a min_scale of 0.80 seems to give usable coverage. The resulting image is 3276x2656 and uses about 60% of the source data (the math says 64.0%), though it gets a bit blurry at the edges. It might be that all I need is access to the undistortion settings during dense reconstruction, the same way there is access to the stereo, fusion, and meshing settings.
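As a sanity check on those percentages (pure arithmetic on the image sizes quoted above):

```shell
# Crop area as a fraction of the full 3320x4096 fisheye frame
awk 'BEGIN {
    full = 3320 * 4096
    printf "default crop:  %.1f%%\n", 100 * 819 * 664 / full
    printf "min_scale 0.8: %.1f%%\n", 100 * 3276 * 2656 / full
}'
```

This prints 4.0% for the default crop and 64.0% for the min_scale 0.8 crop, matching the figures above.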
-Gavin