Visual SFM

Samaneh Rabinia

Oct 24, 2019, 12:50:55 PM
to VisualSFM
Hi everyone,
I am new to VisualSFM and have a few questions. I hope you can help me figure them out.

Let me describe a little of what I am doing: I have a structure with 22 Raspberry Pis. The goal of this structure is to make a 3D model of a plant. Long story short, I have 22 images from 22 RPis, and all the RPis are fixed in position.

My problem is that VisualSFM cannot detect enough features in my images. It creates multiple models, and I cannot see all my cameras; I only see 16 or 18 cameras in VisualSFM when I press the Compute 3D Reconstruction toolbar button. To solve this I tried to calibrate each camera, but I don't know how to add the calibration parameters into VisualSFM. Should I use one camera's calibration parameters for all of them? What is the difference between "set fixed calibration" and "use shared calibration" in VisualSFM?
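(For anyone landing here with the same calibration question: VisualSFM's fixed-calibration option takes the intrinsics as four numbers, fx cx fy cy, in pixels. For intuition about what those numbers mean, here is a minimal pinhole-projection sketch in Python; the focal length and image size below are made-up placeholder values, not calibrated values for any real camera.)

```python
# Pinhole projection: how fx, cx, fy, cy map a 3D point to pixel coordinates.
# fx, fy = focal length in pixels; cx, cy = principal point (roughly image center).

def project(point, fx, fy, cx, cy):
    """Project a 3D point (X, Y, Z) in camera coordinates to pixel (u, v)."""
    X, Y, Z = point
    u = fx * X / Z + cx   # horizontal pixel coordinate
    v = fy * Y / Z + cy   # vertical pixel coordinate
    return u, v

# Placeholder intrinsics for a hypothetical 2592x1944 sensor (NOT calibrated values):
fx = fy = 2500.0
cx, cy = 1296.0, 972.0

print(project((0.1, -0.05, 2.0), fx, fy, cx, cy))  # a point 2 m in front of the camera
```

If all 22 cameras are the same model at the same focal setting, sharing one calibration across them is a reasonable starting point; fixing a per-camera calibration is more accurate once each camera has been calibrated individually.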

I am using vsfm mardy on my Ubuntu system, and I don't have a GPU. Could these be the problem?
I also installed VisualSFM on my 32-bit Windows system, but it cannot load images; it just shows me white rectangles instead of the images.

Thank you so much for your time.

Dave PSB

Oct 24, 2019, 4:26:05 PM
to VisualSFM
Sounds like a cool idea. It has been a while since I used vsfm, but I can think of a few issues:
1) 22 photos is not very many, even if you had a nice solid object.
2) Vsfm generally does not like holes or gaps in objects, and I can imagine that plants have lots of holes and gaps.
3) Photogrammetry is very computationally expensive; you need lots of speed and lots of memory. I didn't even know 32-bit systems existed anymore :-).
4) I thought a CUDA-capable (NVIDIA) GPU was required by the second stage of the pipeline. I can't remember that part of the software, but I think it was something like CMPMVS. Check the system requirements.
Good luck. Like I said, using an array of Raspberry Pis to instantly capture an object is very interesting; maybe it just needs a few more Pis and a bit more muscle behind it.

David Cummins

Oct 26, 2019, 5:32:41 AM
to VisualSFM
I'm not an expert myself, but I do have some thoughts:
  • SFM often has trouble with large flat surfaces, as there aren't many feature points. I assume your plant may have a lot of these.
  • How much crossover is there between cameras? I believe you need a minimum of three photos to capture a point correctly?
  • As daft as it may sound, is there any ability to add features to your plant, e.g. paint something feature-rich like a QR code on a wall so it can't be mixed up with another wall?
  • I'm no expert, but if you can manually calibrate your fixed cameras that does sound ideal.
  • Could you supplement your 22 photos with an initial set of extra photos to help you figure out the correct calibration, then re-use it going forward?
  • What exactly are you hoping to gain from this system? I wouldn't imagine a plant would change much over time?? Is it for security, documentation, something else?
  • Depending upon your goals, e.g. whether you expect geometry to change or only the surface textures, you might use supplementary images ongoing for reconstruction, but only use the 22 "fresh" images to determine pixel colours in the mesh reconstruction.
Hope that gives you something to think over!

Brandon H.

Nov 7, 2019, 12:21:06 PM
to VisualSFM
Can you share a photo?  What are the camera placements?

* Note: With such dynamic features, the changes in angle can result in not enough feature matches. You need multiple cameras to "overlap" in their features for points to be accepted. (There's a toolbar button to disable the three-view minimum, but on a plant that will probably give horrible results.)

1. For 360°, you will likely need more photos. You could place the plant on a turntable and use a plain backdrop to reduce interfering features, then do the shoot multiple times. This can be a problem with a shaking plant. If your fixed-position RPis could be put on a turning (or even unstable) setup, that might be another option.

Depending on the plant, many have similar features (leaves) shared across angles, and this can cause great confusion in the feature-matching process (and low-quality results).
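(That confusion is easy to see in the standard ratio test used to filter feature matches, i.e. Lowe's ratio test in SIFT-style matchers. When two leaves look nearly identical, the best and second-best match distances come out almost equal and the match gets thrown away. The descriptor distances below are made-up numbers, purely for illustration.)

```python
# Lowe's ratio test: accept a match only if the best candidate is clearly
# better than the runner-up. Repeated structures (many similar leaves) make
# the two distances nearly equal, so their matches are rejected.

def ratio_test(best_dist, second_dist, threshold=0.8):
    """Return True if the best match passes the ratio test."""
    return best_dist < threshold * second_dist

# A distinctive feature: the best match is far better than the second-best.
print(ratio_test(0.20, 0.90))   # True -> match accepted

# A repeated leaf: two near-identical candidates, so the match is ambiguous.
print(ratio_test(0.30, 0.32))   # False -> match rejected
```

This is one reason a plain backdrop and, if possible, a few artificial markers help: they add distinctive features that survive this filtering.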

2. Another thing you can try is to take the array of photos all from one direction, which will produce a "one-sided" model with much more subtle changes between viewpoints. Then do the same from another side. The small change in locations might allow a high-quality (one-sided, sort of) reproduction, which can then be matched up with one taken from one or more other sides (most likely with manual effort and cleanup).
