Forgot to ask yesterday.
As far as I know, RefineMesh has hidden settings that allow limiting the area of calculation. Can this help with speed and memory requirements?
And can you please tell me how to pass the required parameters for these settings? Is it some kind of bounding box defined by 2-3 corner coordinates, or something else?
I want to try these settings because at the moment I am testing OpenMVS on photos from a museum, where the object is about 2 m high but the room is 10-20 m long. Too much unneeded data.
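If no such setting exists yet, here is a tiny sketch of how I imagine it: just two opposite corner coordinates of an axis-aligned box, and everything outside is dropped. All names here are my own invention, not actual OpenMVS API.

```cpp
#include <array>
#include <cstdio>
#include <vector>

// Hypothetical sketch only: none of these names come from OpenMVS.
// A region of interest given as two opposite corners of an axis-aligned box;
// anything outside it would be skipped during reconstruction.
struct AABB {
    std::array<double, 3> min, max;  // the two corner coordinates
    bool contains(const std::array<double, 3>& p) const {
        for (int i = 0; i < 3; ++i)
            if (p[i] < min[i] || p[i] > max[i]) return false;
        return true;
    }
};

int main() {
    // The room is 10-20 m long, but the statue occupies only ~2 m: crop to it.
    AABB roi{{-1.0, -1.0, 0.0}, {1.0, 1.0, 2.2}};
    std::vector<std::array<double, 3>> points = {
        {0.1, 0.2, 1.0},  // on the statue -> kept
        {8.0, 0.0, 1.0},  // far wall of the room -> dropped
    };
    for (const auto& p : points)
        if (roi.contains(p))
            std::printf("keep (%.1f, %.1f, %.1f)\n", p[0], p[1], p[2]);
}
```

Even a rough box like this should cut both runtime and memory, since the depth maps and the mesh would never need to cover the rest of the room.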
And another question, about masks.
PhotoScan strongly requires masks because it uses older algorithms whose main goal is the point cloud, and without masks, photos with imperfect DOF produce too much noise.
But as I understood from the research papers OpenMVS is based on, might it be better not to use masks (and instead fill the unwanted area with a flat color), because masks can hide data required for weak-surface calculation and surface refinement?
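To make the difference concrete, here is a small sketch (assuming OpenCV, and a mask image that is white on the object) of a hard mask versus the flat-fill variant I mean:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat image = cv::imread("photo.jpg");
    cv::Mat mask  = cv::imread("mask.png", cv::IMREAD_GRAYSCALE);

    // Option A: hard mask -- the background becomes black, and all image
    // gradients near the silhouette vanish with it.
    cv::Mat hard;
    image.copyTo(hard, mask);

    // Option B: flat fill -- the background is replaced by one mid-gray
    // color, so the matcher sees a smooth, featureless (but not empty)
    // region instead of a hole.
    cv::Mat flat(image.size(), image.type(), cv::Scalar(128, 128, 128));
    image.copyTo(flat, mask);

    cv::imwrite("hard_masked.png", hard);
    cv::imwrite("flat_filled.png", flat);
}
```

My hope is that the flat fill still tells the matcher "nothing interesting here" without cutting a hard edge right at the silhouette, which is exactly where the weak surfaces get refined.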
About me:
I'm a graphic/3D/UI/etc. designer with more than 20 years of experience, a skilled Windows/OSX/FreeBSD user, and now a photogrammetry enthusiast. A few years ago I moved to Japan to live and work.
My interest in photogrammetry tools is mostly a hobby. I'm mostly interested in close-range scans: sculptures, or maybe small buildings, anything that can be scanned without drones.
The goal I want to reach in my projects is roughly 1-10 mm resolution on objects about 2-10 m in size (0.01~0.1% of the object size). Maybe I want too much, but... :)
I can sacrifice speed for quality. I also worked for many years as a FreeBSD admin, so the command line is not a big problem for me.
About 3Dnovator: I want to test it, but it looks like it isn't free. :(
Btw, if this is your startup, and if you ever need any help, or maybe a skilled UI/UX designer for your growing team... ;)
Could you please share the original photos, not the ones manipulated in Lightroom etc.?
(By the way, that is bad practice when it comes to photogrammetry.)
Thanks!
You are completely right; however, any post-processing meant to improve the 3D reconstruction process is the responsibility of the 3D reconstruction software, not the user. For instance, 3Dnovator automatically adjusts the images (lightening/darkening, etc.) as it sees fit.
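To give an idea (this is not the actual 3Dnovator code, just one common way such an automatic adjustment can be done): equalize the luminance channel adaptively and leave the colors alone.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat bgr = cv::imread("photo.jpg");
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);

    std::vector<cv::Mat> channels;
    cv::split(lab, channels);

    // Contrast-limited adaptive histogram equalization on L only:
    // lifts shadows and tames highlights without shifting hue.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    clahe->apply(channels[0], channels[0]);

    cv::merge(channels, lab);
    cv::cvtColor(lab, bgr, cv::COLOR_Lab2BGR);
    cv::imwrite("adjusted.jpg", bgr);
}
```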
I do not understand the last question. I do not use edges in OpenMVS.
Do you have HDR images for this dataset?
The biggest problem is that you used the original untouched photos, but it should have been my touched (edited) ones.
List of problems:
- Lost details on the metallic fixture on the "left arm".
- The face of the sun.
- The gathers on the back side and their details.
That's why I asked you to use my edited sources: most likely 3Dnovator simply drops the details in the shadows, while in my sources I tried to normalize those details.
Another thing in your model annoys me a lot: if you check the edge of the stand in matcap mode, you can see that the shape of the edge stands out from the main shape. My 20 years of experience as a color corrector tell me that at some step something oversharpens the edges. If 3Dnovator uses the same steps as OpenMVS, and judging from what I have seen in the temp files, it is something that sharpens the depth map, maybe the step that creates the edge-mask PNG and later combines it with the raw reconstructed depth map. If you could adjust the "weight" of that edge in the final depth-map calculation, lowering it from 100% to, say, 50% might remove this effect from the final mesh.
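To illustrate what I mean by the weight (all names here are hypothetical; I have only guessed at this from the temp files): the final depth map would be a linear blend of the raw reconstruction and the edge-sharpened version, so at 50% the edges are pulled only half as far out of the surface.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

// Hypothetical sketch of the proposed knob; I have not seen the real code.
// edgeWeight = 1.0 would be the current behavior, 0.5 the proposed one.
cv::Mat blendEdges(const cv::Mat& rawDepth, const cv::Mat& edgeSharpened,
                   double edgeWeight) {
    cv::Mat out;
    // Linear blend: take edgeWeight of the sharpened result and the rest
    // from the raw reconstruction.
    cv::addWeighted(edgeSharpened, edgeWeight, rawDepth, 1.0 - edgeWeight,
                    0.0, out);
    return out;
}

int main() {
    cv::Mat raw   = cv::Mat::ones(4, 4, CV_32F);  // toy raw depth map
    cv::Mat sharp = raw * 1.2f;                   // toy "oversharpened" one
    cv::Mat half  = blendEdges(raw, sharp, 0.5);  // 50% edge influence
    std::cout << half << std::endl;
}
```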
But for a clean understanding, we should compare against the result from my edited sources, and maybe at higher-than-medium settings.
I deliberately did not talk about the texture-generation part. Most software uses generic algorithms for it, and the quality does not vary much. One idea that could improve texture quality (detail and uniformity) is to change the average (mean) to the median at the moment the tool computes a texel from many photo sources.
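A toy illustration of the idea, for a single texel (just the statistic itself, not any real texturing code): the mean is dragged by one specular highlight, while the median ignores it.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Median of an odd-sized sample set (takes a copy; nth_element partially
// sorts it so the middle element lands in place).
static double median(std::vector<double> v) {
    std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
    return v[v.size() / 2];
}

int main() {
    // Five photos saw this texel; one of them caught a specular highlight.
    std::vector<double> samples = {101, 98, 103, 100, 240};

    double mean = 0;
    for (double s : samples) mean += s;
    mean /= samples.size();

    std::printf("mean   = %.1f  (pulled up by the highlight)\n", mean);
    std::printf("median = %.1f  (outlier ignored)\n", median(samples));
}
```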
Btw, I sent you an email from your site; if you can answer it, we can discuss further. I'm not against sharing some ideas that may help improve the software's quality.
I still want to test it myself. I especially want to understand whether it is possible to reach better quality than Agisoft PhotoScan with OpenMVS, and if it is, I really want to find those settings.
And please make another reconstruction, but from the edited photos.
And sorry about my English. It's not my native language, and I'm writing this answer on a train :)))
This is the reconstruction produced by 3Dnovator at medium resolution:
https://skfb.ly/QzTH
How do you find it compares to the other pipelines you tried?