3D Reconstruction


Jonny March

Sep 20, 2024, 9:20:08 AM
to BoofCV
Hello,

I have several Basler GigE cameras and various lenses, including telecentric ones. I'll use pypylon and JEP to get the images into Java, where I'd like to experiment with, among other things, 3D point clouds...

I saw these examples:

Calibration:
https://boofcv.org/index.php?title=Example_Calibrate_Planar_Stereo

Stereo 3D:
https://boofcv.org/index.php?title=Example_Stereo_Disparity_3D

Reconstruction with multiple images:
https://boofcv.org/index.php?title=Example_Multiview_Reconstruction_Dense

I've yet to really get into this, but I was wondering if anyone has done something similar. The examples above seem to be for one or two cameras, but I could throw, say, 3 or 4 cameras at the problem. How would I set that up in the software? In pairs, so that 3 cameras become 3 stereo pairs in software?

Thanks,
Jonny

Kevin Cain

Sep 20, 2024, 9:49:01 AM
to BoofCV
These BoofCV methods are intended either for calibrated camera pairs or for a single uncalibrated camera. If you know the geometry of your stereo camera setup, BoofCV is viable, but recent RAFT-Stereo approaches may be worth looking into as an alternative. Where BoofCV's 3D reconstruction shines is with uncalibrated single cameras -- no initial intrinsics or extrinsics. I'm impressed with the latter, especially considering its tractability at runtime.

As you likely know, BoofCV's 3D reconstruction departs from familiar MVS approaches (COLMAP, MVE, MVS-Net and variants), but it works well as long as the delta between your camera views isn't too large for BoofCV's feature tracking. In practice you can use multiple cameras, but the multi-view stereo method was designed for a single camera moving slowly around a subject. Having many cameras may help with sampling and occlusions, but the drawback is that you'll have redundant information and increased processing time.
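For context on the calibrated-pair route mentioned above: once a disparity map has been computed (e.g. with the BoofCV stereo disparity example linked earlier), depth follows from the standard pinhole stereo relation Z = f * B / d. A minimal library-independent sketch; the focal length, baseline, and disparity values are made-up illustrative numbers, not from a real calibration:

```java
// Depth from disparity for a rectified stereo pair: Z = f * B / d.
// f = focal length in pixels, B = baseline in meters, d = disparity in pixels.
// All numeric values below are illustrative assumptions.
public class DisparityToDepth {
    static double depth(double focalPx, double baselineM, double disparityPx) {
        if (disparityPx <= 0) {
            // Zero or negative disparity means no valid match (point at infinity).
            return Double.POSITIVE_INFINITY;
        }
        return focalPx * baselineM / disparityPx;
    }

    public static void main(String[] args) {
        double f = 800.0; // focal length in pixels (assumed)
        double B = 0.12;  // 12 cm baseline (assumed)
        // Larger disparity -> closer point.
        System.out.println(depth(f, B, 48.0)); // 2.0 (meters)
        System.out.println(depth(f, B, 8.0));  // 12.0 (meters)
    }
}
```

This is also why baseline choice matters when arranging multiple cameras: a wider baseline gives better depth resolution at range, but makes matching harder because the views overlap less.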

Jonny March

Sep 20, 2024, 10:00:37 PM
to BoofCV
Interesting -- I actually hadn't looked into it too much, to be honest.

Redundant information and additional processing aren't necessarily an issue; this isn't product development, just a home project.
So would I need a wide-angle camera to capture the whole object, say, plus narrower ones that could capture finer detail at different angles? Or would it eventually piece the smaller images together?

I could potentially put the object on a rotating platform turning at a fixed rate and capture frames at a fixed rate as well, which, if exploited, should increase accuracy.
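One concrete benefit of the fixed-rate idea: if both the turntable speed and the capture rate are constant, the relative rotation between consecutive frames is known in advance and could seed or constrain pose estimation. A small sketch of that arithmetic; the RPM and fps values are made-up assumptions:

```java
// Known per-frame rotation for an object on a turntable.
// With a fixed turntable speed and a fixed capture rate, the angle
// between consecutive frames is constant and known a priori.
public class TurntableStep {
    // Degrees of object rotation between consecutive frames.
    static double degreesPerFrame(double turntableRpm, double framesPerSecond) {
        double degreesPerSecond = turntableRpm * 360.0 / 60.0;
        return degreesPerSecond / framesPerSecond;
    }

    // Number of frames captured over one full revolution of the object.
    static int framesPerRevolution(double turntableRpm, double framesPerSecond) {
        return (int) Math.round(360.0 / degreesPerFrame(turntableRpm, framesPerSecond));
    }

    public static void main(String[] args) {
        double rpm = 2.0; // turntable speed (assumed)
        double fps = 4.0; // capture rate (assumed)
        System.out.println(degreesPerFrame(rpm, fps));     // 3.0 degrees per frame
        System.out.println(framesPerRevolution(rpm, fps)); // 120 frames
    }
}
```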

Would structured light, say IR, help at all? I assume the light source would have to be stationary relative to the object, with the camera moving.

Thanks for the reply,
Jonny
