Quality, memory and speed settings in OpenMVS


shaa...@gmail.com

Jul 6, 2016, 12:21:35 PM
to openMVS
First of all, a big thank you to the author for creating and supporting such a beautiful tool as OpenMVS!

I am trying to reach the best possible mesh reconstruction quality (and, if possible, to beat commercial software ;) and I have many questions about the settings of the OpenMVS tools.

Which step's settings are the most important for quality?


DensifyPointCloud.exe
-----------------------------------------
--resolution-level arg (=1)       how many times to scale down the images before point cloud computation

• Does 1 mean the images are not scaled down? If I do not scale the images, does this produce a more detailed point cloud/mesh, or does this setting affect only the point cloud?
If I get a memory error in the RefineMesh step, can scaling down at this step help?

--min-resolution arg (=640)    do not scale images lower than this resolution

• If resolution-level = 1, is this setting not needed?


--number-views arg (=4)        number of views used for depth-map estimation (0 - all neighbor views available)

• How do more or fewer views affect depth-map estimation? Is more better, just slower to compute?


--number-views-fuse arg (=3) minimum number of images that agrees with an estimate during fusion in order to consider it inlier

• Is this setting something like a reprojection-error threshold? Does it mean that if a "point" has a small error in 3 images but a bigger error in a 4th image, the point is still accepted,
and if fewer than 3 images agree, the "point" is not counted?
Is a bigger number-views-fuse better for quality?


--estimate-colors arg (=1)     estimate the colors for the dense point-cloud
• Is this required for the next steps if I want to produce a mesh? Or can it be disabled for lower memory usage and better speed?

--estimate-normals arg (=0)  estimate the normals for the dense point-cloud 
• Is this required for the next steps if I want to produce a mesh? Or can it be disabled for lower memory usage and better speed?
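Here is a quick sketch of how I understand --resolution-level and --min-resolution to interact. This is only my reading of the help text, not the actual OpenMVS code, and the function name is mine: each level halves the image, but scaling stops before the smaller side would drop below --min-resolution.

```python
def effective_size(width, height, resolution_level, min_resolution=640):
    """Halve the image once per resolution level, but never let the
    smaller side drop below min_resolution (assumed behaviour)."""
    for _ in range(resolution_level):
        if min(width, height) // 2 < min_resolution:
            break  # further halving would violate --min-resolution
        width //= 2
        height //= 2
    return width, height

print(effective_size(3264, 4896, 1))  # (1632, 2448)
print(effective_size(3264, 4896, 2))  # (816, 1224)
print(effective_size(3264, 4896, 5))  # still (816, 1224): clamped by min-resolution
```

If this reading is right, --min-resolution is irrelevant at low levels on big images, and only starts to matter at aggressive downscaling.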


ReconstructMesh.exe
-----------------------------------------

Reconstruct options:
-d [ --min-point-distance ] arg (=2)  minimum distance in pixels between the projection of two 3D points to consider them different while triangulating (0 - disabled)

• Or is this the reprojection error? Agisoft "recommends" using 0.5 px; ContextCapture probably estimates points down to a 0.25-0.15 px reprojection error.
Does a bigger value mean lower quality? Can it be fractional (0.5-0.25)?

--constant-weight arg (=1)              considers all view weights 1 instead of the available weight
• Please, can you explain a bit about this setting? What does it do? And is it a boolean or a float?

-f [ --free-space-support ] arg (=0)   exploits the free-space support in order to reconstruct weakly-represented surfaces
• As I understand from the research papers, enabling this can help reconstruct weak surfaces? So is it better to enable it for photos with less-than-ideal coverage?
Or can it lower the average quality?


--thickness-factor arg (=2)              multiplier adjusting the minimum thickness considered during visibility weighting
• In which unit?
If an object about 1 meter in size fills about 60% of the photo and I want to reconstruct thin walls about 5-10 mm thick, how big should thickness-factor be?

--quality-factor arg (=1)                  multiplier adjusting the quality weight considered during graph-cut
• I do not understand this one either. Is bigger better, or smaller? Can it be fractional?

Clean options:
--decimate arg (=1)                       decimation factor in range (0..1] to be applied to the reconstructed surface (1 - disabled)
• Is this decimation of the final mesh? Will a 3M-face mesh decimated by half become a 1.5M-face mesh?

--remove-spurious arg (=20)           spurious factor for removing faces with too long edges or isolated components (0 - disabled)
• In which unit?

--remove-spikes arg (=1)               flag controlling the removal of spike faces
• Is this a boolean?

--close-holes arg (=30)                 try to close small holes in the reconstructed surface (0 - disabled)
• In which unit?

--smooth arg (=2)                         number of iterations to smooth the reconstructed surface (0 - disabled)
• Is this needed because the resulting mesh can be quite noisy, or with a good source is it better not to use it?
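On the --decimate question above: my assumption from the help text (not verified against the OpenMVS source, and the function name is mine) is that it is a keep-ratio, so a factor of 0.5 keeps roughly half the faces:

```python
def decimated_face_count(face_count, decimate):
    """--decimate is a factor in (0..1]: the target face count is roughly
    face_count * decimate; 1.0 disables decimation (assumed semantics)."""
    if not 0.0 < decimate <= 1.0:
        raise ValueError("decimate must be in (0..1]")
    return int(face_count * decimate)

print(decimated_face_count(3_000_000, 0.5))  # 1500000: a 3M-face mesh becomes ~1.5M
```

Under this reading the answer to the "3M mesh decimated by half" question would be yes, about 1.5M faces.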


RefineMesh.exe
RefineMeshCUDA.exe
----------------
--max-views arg (=8)           maximum number of neighbor images used to refine the mesh
• Is more better?

--decimate arg (=0)             decimation factor in range [0..1] to be applied to the input surface before refinement (0 - auto, 1 - disabled)
• In which case can this be needed? If in this step we try to create a better mesh with a higher polygon count, how can decimating the result of the previous step help?

--ensure-edge-size arg (=1)  ensure edge size and improve vertex valence of the input surface (0 - disabled, 1 - auto, 2 - force)
• Does disabling it lower the quality?

--max-face-area arg (=64)    maximum face area projected in any pair of images that is not subdivided (0 - disabled)
• In pixels? Is smaller better for details, but can it create more noise?

--scales arg (=3)                 how many iterations to run mesh optimization on multi-scale images
• Do more iterations mean more refinement and more detail?

--scale-step arg (=0.5)         image scale factor used at each mesh optimization step
• Is bigger better (images scaled down less)?

--reduce-memory arg (=1)    recompute some data in order to reduce memory requirements
• I have not seen a difference; the app eats all available RAM.

--alternate-pair arg (=0)        refine mesh using an image pair alternatively as reference (0 - both, 1 - alternate, 2 - only left, 3 - only right)
• If possible, can you explain in which case this can be needed?

--regularity-weight arg          scalar regularity weight to balance between photo-consistency and regularization terms during mesh optimization
--rigidity-elasticity-ratio arg   scalar ratio used to compute the regularity gradient as a combination of rigidity and elasticity
--gradient-step arg               gradient step to be used instead (0 - auto)
--planar-vertex-ratio arg (=0)  threshold used to remove vertices on planar patches (0 - disabled)
• This looks like mathematical magic, but are there any settings here that can help with quality, memory, or speed?

--use-cuda arg (=1)              refine mesh using CUDA

On a GTX960 with 2 GB of VRAM, the CUDA version produces a lot of low-memory errors and crashes. Is 2 GB of VRAM not enough for 12-22 Mpx images?
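On --scales and --scale-step: my reading (an assumption about the semantics, not the actual implementation; the helper name is mine) is that refinement runs coarse-to-fine, with the image scaled by scale-step raised to a decreasing power at each optimization pass:

```python
def refinement_scales(scales=3, scale_step=0.5):
    """Image scale factors for each multi-scale refinement pass,
    coarsest first (assumed: scale_step**(scales-1) up to 1.0)."""
    return [scale_step ** i for i in range(scales - 1, -1, -1)]

print(refinement_scales(3, 0.5))  # [0.25, 0.5, 1.0]
print(refinement_scales(4, 0.5))  # [0.125, 0.25, 0.5, 1.0]: extra coarse pass
```

If that holds, raising --scales adds cheaper coarse passes before the expensive full-resolution one, while raising --scale-step makes every pass closer to full resolution (slower, more memory).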

=========


And about memory.

I tested OpenMVS on 209 photos of 3264x4896 px:

>> DensifyPointCloud.exe --number-views 5 -v 3 --estimate-colors 0 scene.mvs

17:18:39 [App     ] Depth-maps fused and filtered: 209 depth-maps, 591793105 depths, 51249235 points (9%) (8m48s720ms)
17:18:44 [App     ] Dense point-cloud composed of:
        0 points with 1- views
        0 points with 2 views
        51249235 points with 3+ views
17:18:44 [App     ] Densifying point-cloud completed: 51249235 points (3h21m26s153ms)
17:20:33 [App     ] Scene saved (1m48s677ms):
        209 images (209 calibrated)
        51249235 points, 0 vertices, 0 faces
17:20:39 [App     ] Point-cloud saved: 51249235 points (5s574ms)
17:20:47 [App     ] MEMORYINFO: {
17:20:47 [App     ]     PageFaultCount 78060013
17:20:47 [App     ]     PeakWorkingSetSize 24.35GB
17:20:47 [App     ]     WorkingSetSize 7.45GB
17:20:47 [App     ]     QuotaPeakPagedPoolUsage 16.33MB
17:20:47 [App     ]     QuotaPagedPoolUsage 16.33MB
17:20:47 [App     ]     QuotaPeakNonPagedPoolUsage 3.49MB
17:20:47 [App     ]     QuotaNonPagedPoolUsage 3.38MB
17:20:47 [App     ]     PagefileUsage 13.28GB
17:20:47 [App     ]     PeakPagefileUsage 33.87GB
17:20:47 [App     ] } ENDINFO
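As a side note on reading the fusion log above: the "(9%)" appears to be simply the fraction of raw per-view depth samples that survived fusion into 3D points. A quick check with the numbers copied from the log:

```python
# Numbers copied from the DensifyPointCloud log line above.
depths_before_fusion = 591_793_105   # individual depth estimates across 209 maps
points_after_fusion = 51_249_235     # fused 3D points

survival_percent = round(100 * points_after_fusion / depths_before_fusion)
print(survival_percent)  # 9, matching the "(9%)" in the log
```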



>>ReconstructMesh.exe -v 3 --smooth 0 -f 1 scene_dense.mvs

17:27:29 [App     ] Build date: May  8 2016, 13:31:13
17:27:29 [App     ] CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
17:27:29 [App     ] RAM: 31.88GB Physical Memory 128.00TB Virtual Memory
17:27:29 [App     ] OS: Windows 10+ x64
17:27:29 [App     ] SSE & AVX compatible CPU & OS detected
17:27:29 [App     ] Command line: -v 3 --smooth 0 -f 1 scene_dense.mvs
17:28:24 [App     ] Scene loaded (55s135ms):
        209 images (209 calibrated) with a total of 796.30 MPixels (3.81 MPixels/image)
        51249235 points, 0 vertices, 0 faces
17:39:47 [App     ] Delaunay tetrahedralization completed: 51249235 points -> 33249290 vertices, 219797340 (+276) cells, 439594818 (+414) faces (10m55s69ms)
19:24:00 [App     ]     weighting completed in 1h41m24s764ms
19:50:07 [App     ]     t-edge reinforcement completed in 26m7s25ms
19:50:19 [App     ] Delaunay tetrahedras weighting completed: 219797616 cells, 439595232 faces (2h10m19s39ms)


And after waiting through the weighting stage, while the tool ate all 32 GB of RAM, it crashed.



I am really sorry for the huge post.

cDc

Jul 6, 2016, 2:16:58 PM
to openMVS, shaa...@gmail.com
Thank you for the interest, and please tell us a bit about what you plan to use OpenMVS for.

You got most of the parameters right, and although I tried to make them as generic as possible, sometimes you have to adjust them depending on the dataset in order to get the best results.

For instance, with high-resolution images like yours, you should get the best accuracy/speed for DensifyPointCloud by also adding: --resolution-level 2 (or even 3)

As for ReconstructMesh, I'd recommend the default parameters; replace all the params you used with -d 6 (or even higher: it reduces memory usage, but may lose details)

Often OpenMVS can generate results as good as or sometimes even better than commercial software, but its main drawback is scalability (it runs out of memory quite fast). To overcome this you can use its big brother: 3Dnovator

shaa...@gmail.com

Jul 6, 2016, 8:24:41 PM
to openMVS
Thanks for the answer! I'll try right now with the new settings.

Forgot to ask yesterday.
As far as I know, RefineMesh has hidden settings that allow limiting the area of computation. Can this help with speed and memory requirements?
And please, can you tell me how to pass the required parameters for these settings? Is it some kind of bounding box defined by 2-3 corner coordinates, or something else?
I want to try these settings because at the moment I am testing OpenMVS on photos from a museum, with an object about 2 m high in a room about 10-20 m long. Too much unneeded data.

And another question, about masks.
Photoscan strongly requires masks because it uses outdated algorithms whose main goal is the point cloud; without masks, photos with imperfect DOF produce too much noise.
But as I understood from the research papers used in OpenMVS, it may be better not to use masks (filling the unwanted area with a flat color), because masks can hide data needed for weak-surface calculation and surface refinement?


About me:
I'm a graphic/3D/UI/etc. designer with more than 20 years of experience, a skilled Windows/OSX/FreeBSD user, and now a photogrammetry enthusiast. Some years ago I started living and working in Japan.

My interest in photogrammetry tools is mostly a hobby. I am mostly interested in close-range scans: sculptures, or maybe small buildings, anything that can be scanned without drones.

The goal I want to reach in my projects is about 1-10 mm resolution on objects of about 2-10 meters (0.01~0.1% of object size). Maybe I want too much, but... :)
I can sacrifice speed for quality. Also, I worked many years as a FreeBSD admin, so the command line is not a problem for me.


About 3Dnovator: I want to test it, but it looks like it is not free. :(
Btw, if this is your startup, and if you ever need any help, or maybe a skilled UI/UX designer in your growing team... ;)

cDc

Jul 7, 2016, 4:08:02 AM
to openMVS, shaa...@gmail.com
Yes, you can refine only the desired part of the scene by simply editing the mesh and removing any background objects, and then supplying the cut mesh in addition to the normal params: --mesh-file <filename>

Also, in order to reduce memory requirements, you can again use --resolution-level 1 or 2

Hope you get good results! Don't forget to show them to us too :)

I'll keep it in mind ;)

shaa...@gmail.com

Jul 7, 2016, 7:28:54 AM
to openMVS, shaa...@gmail.com
Thank you! 

But is there any way to limit the area in the earlier DensifyPointCloud and ReconstructMesh steps?
Or should I use the lowest possible resolution and quality to compute a simple mesh, clean it of unneeded polygons, and use it as --mesh-file?


Also, it looks like enabling free-space-support was not a good idea: the result before refinement was too ugly. So I probably should not use it on imperfect photos.

Btw, -d 4 was enough to finish successfully:

209 images (209 calibrated) with a total of 796.30 MPixels (3.81 MPixels/image)
51249235 points, 0 vertices, 0 faces 

14:59:36 [App     ] Mesh saved: 3698595 vertices, 7394552 faces (2s261ms)
14:59:36 [App     ] MEMORYINFO: {
14:59:36 [App     ] PageFaultCount 25347868
14:59:36 [App     ] PeakWorkingSetSize 27.51GB
14:59:36 [App     ] WorkingSetSize 333.20MB
14:59:36 [App     ] QuotaPeakPagedPoolUsage 16.32MB
14:59:36 [App     ] QuotaPagedPoolUsage 16.32MB
14:59:36 [App     ] QuotaPeakNonPagedPoolUsage 3.80MB
14:59:36 [App     ] QuotaNonPagedPoolUsage 3.41MB
14:59:36 [App     ] PagefileUsage 511.11MB
14:59:36 [App     ] PeakPagefileUsage 33.39GB
14:59:36 [App     ] } ENDINFO

er...@ishg.co.jp

Jul 8, 2016, 6:24:11 AM
to openMVS, shaa...@gmail.com
Questions, questions, questions...

If RefineMeshCUDA produces CUDA_ERROR_OUT_OF_MEMORY errors, are these only warnings that the tool can catch (and fall back to the CPU, for example), or is it a fault that makes the refinement result unpredictable?

cDc

Jul 8, 2016, 10:15:22 AM
to openMVS, shaa...@gmail.com
No, that is not currently implemented in OpenMVS!
No, the initial mesh should ideally be as close as possible to the real surface, because the refinement step only recovers details, not big inaccuracies.

cDc

Jul 8, 2016, 10:16:34 AM
to openMVS, er...@ishg.co.jp
That is an error, and there is no automatic fallback to the CPU implemented.

shaa...@gmail.com

Jul 11, 2016, 3:52:46 AM
to openMVS, er...@ishg.co.jp
ReconstructMesh.exe -v 3 scene_dense.mvs
16:44:36 [App     ] Build date: May  8 2016, 13:31:13
16:44:36 [App     ] CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz
16:44:36 [App     ] RAM: 31.88GB Physical Memory 128.00TB Virtual Memory
16:44:36 [App     ] OS: Windows 10+ x64
16:44:36 [App     ] SSE & AVX compatible CPU & OS detected
16:44:36 [App     ] Command line: -v 3 scene_dense.mvs
16:45:19 [App     ] Scene loaded (43s710ms):
        209 images (209 calibrated) with a total of 796.30 MPixels (3.81 MPixels/image)
        36775309 points, 0 vertices, 0 faces

What can make ReconstructMesh not produce any mesh?
I already tried -d 4, 6, and more, with the same result.

cDc

Jul 11, 2016, 4:05:49 AM
to openMVS, shaa...@gmail.com
If this is the whole log output, maybe you ran out of memory again. Try closing all other applications, and maybe an even bigger -d param.

shaa...@gmail.com

Jul 11, 2016, 4:41:37 AM
to openMVS, shaa...@gmail.com
Strange, I already tried -d 10240 with the same result. Memory does not grow beyond 4 GB.

Maybe I can share the required *.mvs and source files, so you can check why this happens if you have time?

Actually, at the moment I am trying to compute the model from masked photos (unneeded area filled with black), while the unmasked ones, with some tuning, do produce a mesh.
I just thought that without the unneeded area it would require less memory.

Anyway, the results are still far from Agisoft Photoscan, while Photoscan is far from Bentley ContextCapture.

I can share all the source images, so maybe you can also try them with OpenMVS and 3Dnovator?

cDc

Jul 11, 2016, 4:51:37 AM
to openMVS, shaa...@gmail.com
Yes, please share the images and any calibration info you have.

shaa...@gmail.com

Jul 11, 2016, 5:13:13 AM
to openMVS, shaa...@gmail.com
This is the current (AthenaCut) dense cloud .ply and .mvs file


Athena sources
scene.nvm and scene.mvs included (nothing special, mostly default settings)

Athena with mask (AthenaCut) sources
scene.nvm and scene.mvs included (nothing special, mostly default settings)


This is the result from ContextCapture.
Check the matcap rendering for mesh quality.

This is from Photoscan 

cDc

Jul 11, 2016, 5:16:20 AM
to openMVS, shaa...@gmail.com
The first link contains an empty folder. Please add the original images there.

shaa...@gmail.com

Jul 11, 2016, 5:18:09 AM
to openMVS, shaa...@gmail.com
Ah, I thought the upload had already finished. It needs about 25 more minutes.

The other folders should be OK.

cDc

Jul 11, 2016, 6:15:30 AM
to openMVS, shaa...@gmail.com
Could you please share the original photos, not ones manipulated in Lightroom etc.? (By the way, that is bad practice when it comes to photogrammetry.)

er...@ishg.co.jp

Jul 11, 2016, 6:31:40 AM
to openMVS, shaa...@gmail.com
On Monday, July 11, 2016 at 7:15:30 PM UTC+9, cDc wrote:
Could you please share the original photos, not manipulated in LightRoom etc?

(by the way, this is bad practice when it comes to photogrammetry)
 
It looks like this is a common misconception about bad practice. Bad practice is any post-processing that can affect shape, like sharpening or denoising
(though I think light denoising is still admissible if you don't mind losing a little detail).
As I remember from the papers about SIFT, it does not use lighting information except the contrast that helps find "features". That is why it is possible to use photos taken in different lighting conditions.

Also, my experiments show better results with slightly lightened/darkened images than with untouched ones that have "lost" shadows.

I can share the untouched files, but that would not be a fair comparison with my results.
In that case it would be like comparing a scan of one model made with an iPhone against a Canon Mark II camera, or against a laser scan of the same object.

This is the source.
But if possible, please use my "touched" files.

cDc

Jul 11, 2016, 6:45:54 AM
to openMVS, er...@ishg.co.jp
Thanks!
You are completely right; however, any post-processing meant to improve the 3D reconstruction process is the responsibility of the 3D reconstruction software, not the user. For instance, 3Dnovator automatically adjusts the images (lightening/darkening, etc.) as it sees fit.

er...@ishg.co.jp

Jul 11, 2016, 6:56:45 AM
to openMVS, er...@ishg.co.jp


On Monday, July 11, 2016 at 7:45:54 PM UTC+9, cDc wrote:
Thanks!
You are completely right; however, any post-processing meant to improve the 3D reconstruction process is the responsibility of the 3D reconstruction software, not the user. For instance, 3Dnovator automatically adjusts the images (lightening/darkening, etc.) as it sees fit.

Ah, I hadn't thought of such an ability. Does OpenMVS use the same preprocessing?
But I think you should make this optional, because I don't think it will help much with 16-bit HDR images, and it may even make the results worse.
Also, I probably tested lightened images only with Agisoft Photoscan, where this preprocessing did help; but later I found that it uses outdated algorithms. Maybe I should retest with ContextCapture.

Also, I have seen that DensifyPointCloud creates depth-map and edge images and later uses them in the final depth-map calculation. Is it possible (maybe in code) to choose or fine-tune the edge-detection algorithm? In my video work, the common "Sobel" and other edge-detection algorithms were sometimes not the best; some custom or combined ones worked better.
Or am I going in the wrong direction, and does the edge-detection part have the smallest possible effect on the final depth-map results?

cDc

Jul 11, 2016, 7:38:01 AM
to openMVS, er...@ishg.co.jp
Using HDR is a different story. For instance, 3Dnovator does not support HDR images directly, so in that case a smart tone-mapping algorithm should be used beforehand to make sure as much info as possible is retained during JPEG conversion. Do you have HDR images for this dataset?

OpenMVS automatically adjusts images during texturing; for the other stages it is not needed.

I do not understand the last question. I do not use edges in OpenMVS.

er...@ishg.co.jp

Jul 11, 2016, 7:50:02 AM
to openMVS, er...@ishg.co.jp

I do not understand the last question. I do not use edges in OpenMVS.

DensifyPointCloud with -v 3 produces some additional grayscale depth-map PNGs, each paired with what looks like an edge-detection of the source image. Later they are combined; I don't know how, this is just what I can see in the project folder.

er...@ishg.co.jp

Jul 11, 2016, 7:51:25 AM
to openMVS, er...@ishg.co.jp


On Monday, July 11, 2016 at 8:38:01 PM UTC+9, cDc wrote:
 Do you have HDR images for this dataset?

No, this is an LDR source.

cDc

Jul 11, 2016, 4:19:06 PM
to openMVS, er...@ishg.co.jp
This is the reconstruction produced by 3Dnovator at medium resolution:
https://skfb.ly/QzTH
How do you find that this compares to the other pipelines you tried?

shaa...@gmail.com

Jul 11, 2016, 7:43:30 PM
to openMVS
Good!
The details look better than Agisoft Photoscan,
and a little less detailed than ContextCapture. Check the face of the sun in the matcap view.

The biggest problem is that you used the original untouched photos, not my touched ones.
List of problems:
Lost details on the metallic fixture on the "left arm".
The face of the sun.
The gathers on the back side and their details.

That is why I asked you to use my touched sources: probably 3Dnovator just drops details in the shadows, while in my sources I tried to normalize the details.

Another thing in your model annoys me a lot. If you check the edge of the stand in matcap mode, you can see that the shape of the edge stands out from the main shape. And my 20 years of experience as a color corrector tell me that at some step something oversharpens the edges. If 3Dnovator uses the same steps as OpenMVS, then judging by what I saw in the temp files, something sharpens the depth map, maybe the step that creates the edge-mask PNG and later combines it with the raw reconstructed depth map. If you can adjust the "weight" of this edge in the final depth-map calculation, lowering it from 100% to maybe 50% might remove this effect from the final mesh.
But for a clear understanding we should check the result from my touched sources, and maybe at higher-than-medium settings.

I deliberately did not talk about the texture-generation part. Most software uses generic algorithms for it, and the quality does not vary too much. One idea that could improve texture quality (detail and uniformity) is to change the average (mean) to the median at the moment the tool computes the texture from many photo sources.

Btw, I wrote you a mail from your site; if you can answer, we can discuss it. I am not against sharing some ideas that might help improve the quality of the software.

I still want to test it myself. I especially want to understand whether it is possible to reach better quality than Agisoft Photoscan with OpenMVS, and if it is possible, I really want to find those settings.

And please make another reconstruction, but from the touched photos.

And sorry about my English. It is not my native language, and I am writing this answer on a train :)))

shaa...@gmail.com

Jul 11, 2016, 10:00:38 PM
to openMVS, er...@ishg.co.jp


On Tuesday, July 12, 2016 at 5:19:06 AM UTC+9, cDc wrote:
This is the reconstruction produced by 3Dnovator at medium resolution:
https://skfb.ly/QzTH
How do you find that this compares to the other pipelines you tried?

BTW, I think it is better to change the license a little. Creative Commons is OK, but with no commercial usage: the scan's author and the museum could be against it.

shaa...@gmail.com

Jul 11, 2016, 11:18:12 PM
to openMVS, er...@ishg.co.jp
BTW, another strong competitor: https://3digify.com
They just combine common photogrammetry tricks into one piece of software.
And with a two-camera-plus-projector setup, they are near the edge that shifts their app from photogrammetry to 3D scanning.
Sadly it is not possible to test it with my own images, but some of the examples they made with the help of a projector have really good meshes.

cDc

Jul 12, 2016, 4:34:06 AM
to openMVS, shaa...@gmail.com
Thank you for the feedback, appreciated!

er...@ishg.co.jp

Jul 15, 2016, 4:51:39 AM
to openMVS
I saw the Etruscan tomb reconstruction made in 3Dnovator on Sketchfab.
It looks like there is not enough resolution compared to the Memento version :(. Geoffrey just baked the high-res details into a normal map.

BTW, which part of the reconstruction is the most memory-consuming for OpenMVS and 3Dnovator?
Commercial apps bypass this problem, and maybe I can discuss with some coder friends and share some ideas on how memory usage could be reduced.