--
You received this message because you are subscribed to the Google Groups "Theia Vision Library" group.
To unsubscribe from this group and stop receiving emails from it, send an email to theia-vision-lib...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
I understand how detectors and descriptors are used differently. I too have found that using SIFT (dense) to generate matches in a std::vector, then appending that vector with AKAZE (sparse/normal) matches, does in fact yield more points. But I am also not sure whether this significantly improves robustness or accuracy.
Agree 100%. I have found SIFT to be the best performer in almost ALL circumstances, provided the affine distortion (transition tilt, in ASIFT terms?) is not excessive. I have read a bit on the SIFT parameters and have struggled some with very high resolution (>4K) imagery in getting features to cover all of the intricate areas of interest. Ultimately I found that AKAZE worked well for detecting keypoints on intricate man-made areas, while SIFT works well across areas with less variance, such as walls, streets, and terrain without complex vegetation.
I've got two matching strategies prototyped that I hope will improve the matching stage. One uses a computed optical flow map and the theia::L2 distance to compute 'forward matches': using a bounding box, we brute-force match the descriptors between the predicted region and the forward box, and only add them to the matches vector after a Lowe's ratio check. This seems to work pretty well, but I need to do more testing.
Been looking at using CUDA for feature extraction, but after reading how fast CUDA can operate, I realize I need to spend more time on matching, pose estimation, etc. As you said, don't overthink it; just use a GPU and crank out the features and keypoints, right?
I looked at progressive all the way through and it looks very interesting. In other work we are heavily investigating octrees and can now render hundreds of millions of points in 'real time' (>60 fps) on Intel HD graphics. There are some custom shaders as well that help create a natural look. This is getting me to lean more toward generating very dense point clouds and worrying less about meshing and texturing, though those remain interesting for exports to other applications that cannot render point clouds well. The octree also seems to lend itself well to compression and change detection!
Last night I started Rick Szeliski's CV book. I'm just getting through the book overview and am excited to start chapter 2! Thank you for this recommendation. I'll likely be bouncing back and forth between Khan Academy, to sharpen my math skills, and this book. I will definitely reach out when I get to the point where I can ask more intelligent questions without mixing up the jargon/terms.
Thanks again
Charles