Hello,
I'm trying to get photo sets that use clay to hold artifacts to properly align. I have photo sets that were taken with the purpose of working in Metashape and/or RealityScan (see image below) that I want to test in Meshroom.
These sets consist of two groups (chunks) that correspond to the object being oriented in opposite directions. I'm accustomed to the Metashape/RealityScan workflows, where you either clip parts of the mesh before aligning and merging chunks (Metashape) or export masks from the partial models to use for a final model (RealityScan).
I've tried several workflows (including some of the newer pipelines) but keep running into issues. I'm familiar with the approach by Alban Brice Pimpaud, which makes me think the problem is the clay base in these photo sets. In Metashape/RealityScan I normally rely on the mask-generation features to iterate toward higher quality models. Since that feature/workflow doesn't exist in Meshroom, how do people accomplish this?
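For reference, here's a rough sketch of how I've been trying to approximate the mask step outside of Meshroom: a simple OpenCV color threshold that keys on the clay base and writes one binary mask per photo. The folder names and HSV range are just placeholders for my setup, and I'm not sure I'm pointing the right Meshroom node/parameter at the resulting masks folder afterwards.

```python
# Rough mask sketch: threshold the clay base by color and write per-image
# binary masks (white = keep, black = ignore). The HSV range is a guess
# and would need tuning for the actual clay in my photos.
from pathlib import Path

import cv2
import numpy as np

PHOTO_DIR = Path("chunk_A/images")   # placeholder paths for my photo set
MASK_DIR = Path("chunk_A/masks")
MASK_DIR.mkdir(parents=True, exist_ok=True)

# Approximate HSV range for the clay base (assumption, needs tuning)
CLAY_LOW = np.array([5, 30, 40])
CLAY_HIGH = np.array([30, 180, 220])

for photo in sorted(PHOTO_DIR.glob("*.jpg")):
    img = cv2.imread(str(photo))
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    clay = cv2.inRange(hsv, CLAY_LOW, CLAY_HIGH)          # pixels that look like clay
    clay = cv2.morphologyEx(clay, cv2.MORPH_CLOSE,
                            np.ones((15, 15), np.uint8))  # fill small holes in the clay region

    mask = cv2.bitwise_not(clay)                          # keep everything except the clay
    cv2.imwrite(str(MASK_DIR / f"{photo.stem}.png"), mask)
```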
I have also had some luck processing each set independently, exporting the models, and completing a full 3D model in MeshLab. I think I'm just missing some steps (and perhaps nodes)?
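And this is roughly the merge step I've been doing by hand in MeshLab, sketched here with Open3D instead just to show what I mean (file names are placeholders, and it assumes the two partial meshes already roughly overlap, e.g. after a manual pre-alignment):

```python
# Rough merge sketch: refine the alignment of two partial meshes with ICP,
# then concatenate them. Assumes the meshes are already roughly aligned.
import open3d as o3d

mesh_a = o3d.io.read_triangle_mesh("chunk_A.ply")   # placeholder file names
mesh_b = o3d.io.read_triangle_mesh("chunk_B.ply")

# ICP works on point clouds, so sample points from each mesh
pcd_a = mesh_a.sample_points_uniformly(number_of_points=100_000)
pcd_b = mesh_b.sample_points_uniformly(number_of_points=100_000)

# Refine B onto A with point-to-point ICP (the 0.01 distance threshold is a guess,
# in whatever units the meshes were exported in)
result = o3d.pipelines.registration.registration_icp(
    pcd_b, pcd_a, 0.01,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

mesh_b.transform(result.transformation)   # apply the refined transform to mesh B
merged = mesh_a + mesh_b                  # simple concatenation, no remeshing
o3d.io.write_triangle_mesh("merged.ply", merged)
```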
Thanks in advance for any help!