bundle_adjust reading Image Support Data from .JSON


John Pierangelo

Mar 20, 2026, 7:07:58 PM
to Ames Stereo Pipeline Support
I am curious how bundle_adjust acquires and uses camera data from a provided JSON file after constructing a Community Sensor Model camera. Specifically, where can I find the interaction between the JSON-based camera model and interest point matching in Vision Workbench?
After some digging through the ASP, VW, and USGS CSM repos, I ran into the large network of dependencies and am getting a bit lost. If you have any suggestions that might point me in the right direction, I would greatly appreciate it!

Best,
John

Oleg Alexandrov

Mar 20, 2026, 7:11:50 PM
to John Pierangelo, Ames Stereo Pipeline Support
The Community Sensor Model, like any other camera model, creates a camera object that can trace rays to and from the camera. Interest point matching is purely pixel-based early on; it uses feature detectors such as SIFT. After a feature is found in both images, rays are traced from those features through the cameras to the ground, where they intersect to form a triangulated point. After that, one can do various filtering for outliers, optimize the camera positions, etc.
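To make the geometry above concrete, here is a minimal NumPy sketch of the triangulation step: given two camera centers and the ray directions traced through a matched feature in each image, the triangulated point is taken as the midpoint of closest approach between the two rays. All names here are illustrative; this is not ASP's actual implementation (ASP's triangulation lives in Vision Workbench), just the standard two-ray midpoint formula.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint of closest approach between two rays.

    Each ray is given by a camera center c and a direction d
    (as produced by tracing a matched feature through the camera model).
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1
    # Normal equations for minimizing |(c1 + t1*d1) - (c2 + t2*d2)|^2
    a12 = d1 @ d2
    denom = 1.0 - a12**2          # rays must not be parallel
    t1 = ((d1 @ b) - a12 * (d2 @ b)) / denom
    t2 = (a12 * (d1 @ b) - (d2 @ b)) / denom
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Two rays that intersect at the origin:
p = triangulate(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                np.array([0.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(p)  # [0. 0. 0.]
```

In practice the rays rarely intersect exactly (pixel noise, camera errors), so the distance between the two closest points is itself a useful quantity: it is the triangulation error that bundle adjustment tries to drive down by optimizing the camera positions.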

--
You received this message because you are subscribed to the Google Groups "Ames Stereo Pipeline Support" group.
To unsubscribe from this group and stop receiving emails from it, send an email to ames-stereo-pipeline...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/ames-stereo-pipeline-support/407f2414-3884-4f56-b046-872cbb2ccc43n%40googlegroups.com.