Re: 3D representation of 2D fitted model


Bernhard Egger

unread,
Jan 17, 2018, 4:17:49 AM1/17/18
to scalismo-faces
Hi Jerry,

the result of the fit is a set of RenderParameters. 
In the Tutorial (05_FaceFitting_06_ImageFitting) you have a line like this:
val bestSample = posteriorSamples.maxBy { case Sample(it, x, value) => value }
overlay.render(bestSample.sample)
This will render the fit back to an image.
What you want instead of rendering is the set of parameters. Those are in bestSample.sample. You could also assign them to a variable, which will be of type RenderParameter.
A RenderParameter contains everything needed to generate the scene and render an image (shape, color and expression parameters, as well as pose, camera and illumination).
You can then change the pose, the illumination, or whatever you want on the RenderParameter and generate an image from it (using modelRenderer.renderImage). Or you can render the 3D mesh using modelRenderer.renderMesh.
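To make this concrete, here is a small sketch of changing the pose on a fitted RenderParameter and rendering a new image. It assumes the tutorial setup; the yaw field on Pose and the PixelImageIO writer are from scalismo-faces as I recall them, so please check them against your library version. The fitted parameter is left as a placeholder (???), to be filled with bestSample.sample from the fit:

```scala
import java.io.File
import scalismo.faces.io.{MoMoIO, PixelImageIO}
import scalismo.faces.parameters.RenderParameter
import scalismo.faces.sampling.face.MoMoRenderer

// the fitted parameters, e.g. bestSample.sample from the tutorial fit
val fitted: RenderParameter = ???

// load the model and build a renderer, as in the tutorial
val momo = MoMoIO.read(new File("data/model2017-1_face12_nomouth.h5").toURI).get.neutralModel
val renderer = MoMoRenderer(momo)

// change only the yaw angle of the pose, keep everything else as fitted
val rotated = fitted.copy(pose = fitted.pose.copy(yaw = math.toRadians(30)))

// render the modified scene and write the result to disk
val image = renderer.renderImage(rotated)
PixelImageIO.write(image, new File("fit-rotated.png")).get
```

The same RenderParameter can instead be passed to renderer.renderMesh to obtain the posed 3D mesh rather than a 2D image.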

The modelRenderer you have in the tutorial:
val momoFile = new File("data/model2017-1_face12_nomouth.h5") 
val momoURI = momoFile.toURI 
val momo = MoMoIO.read(momoURI).get.neutralModel 
val modelRenderer = MoMoRenderer(momo)

Best
Bernhard

On Tuesday, January 16, 2018 at 10:21:59 PM UTC+1, Jerry Liu wrote:
Hi there,

Sorry about another question so soon. I've been going through all of the tutorial, and I am really impressed by the GUI and visualization of the 3D model. In addition, I tried the fitting of the 2D image, and the results for your example on ws_13.png were great. However, I can't seem to find information on how to use the fit after it is run on the ws_13.png image and visualize the 3D version of it. It seems the fit is just a 2D image and not a 3D model. Could you help guide me through this process?

Thanks again,
Jerry
Message has been deleted

Andreas Forster

unread,
Jan 18, 2018, 8:52:23 AM1/18/18
to Jerry Liu, scalismo-faces
Hi Jerry

Code that is not part of the library can hopefully in all cases be found in the Appendix chapter, under List of Helper Functions. There you can find the function createValueLogger. The guiImageLogger is a variable that is used often in the tutorial; its first two occurrences are in chapter 5.3 and chapter 6.3.

In the ScalismoUI you can currently only visualize shapes, not meshes with vertex color or texture. To generate a mesh, given that you have a RenderParameter and a MoMo, you can use the following code:
val momo: MoMo = ???
val rps: RenderParameter = ???
val mesh: VertexColorMesh3D = momo.instance(rps.momo.coefficients)
val shape: TriangleMesh3D = mesh.shape
You should then be able to display the variable shape in the UI with something like:
val ui = ScalismoUI()
val grp = ui.createGroup("mesh")
ui.show(grp,shape,"shape-only")
To view the mesh in 3D with color, write the mesh to disk and visualize it using, for example, MeshLab. The code to write a mesh is:
MeshIO.write(mesh,new File("mesh.ply"))

I hope this helps.

Best regards
Andreas

On Thu, Jan 18, 2018 at 2:32 PM, Jerry Liu <3jer...@gmail.com> wrote:
Hi Bernhard,

I apologize, I think I may have not been clear enough in my explanation. 

My problem is 2 part: 

Firstly, I am now coding in a separate IDE (IntelliJ) and have noticed that the different types of logger (createValueLogger, guiImageLogger) are not available. Is there a specific library I need to obtain them, since they seem important for the 3D model fit generation?

Secondly, I am aware of visualizing 3D meshes on the image panel, I thought that feature worked quite well. However, I wanted to take the mesh generated by the fitting and visualize it on the 3D UI that was demonstrated in the beginning of the tutorial. I saw that they were usually generated using a line like val referenceMesh = MeshIO.readMesh(new File("data/facemesh.stl")).get and not from a RenderParameter. Could you help me with generating a mesh from a RenderParameter or visualizing one directly to the ScalismoUI()?

Thanks so much,
Jerry




--
*****************************************
Dr. Andreas Morel-Forster
Departement Mathematik und Informatik
Spiegelgasse 1
CH-4051 Basel
PHONE: +41 61 207 05 52
MAIL: Andreas.Forster@unibas.ch
*****************************************

Bernhard Egger

unread,
Jan 18, 2018, 5:13:47 PM1/18/18
to scalismo-faces
Hi Jerry,

we do not provide face or feature point detection - you will need an external library to do this.

To reach the best results, we would recommend using the fitscript that implements all the details of our publication:
Markov Chain Monte Carlo for Automated Face Image Analysis
Sandro Schönborn, Bernhard Egger, Andreas Morel-Forster and Thomas Vetter
International Journal of Computer Vision 123(2), 160-183 , June 2017

You can find that fitscript here:

This fitscript expects some externally detected landmark points.
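For illustration, externally detected landmarks in the TLMS text format used by scalismo-faces can be read with TLMSLandmarksIO, roughly as sketched below. The file name is a placeholder, and the exact fields of the landmark type should be checked against your library version:

```scala
import java.io.File
import scalismo.faces.io.TLMSLandmarksIO

// read externally detected 2D landmarks in TLMS format (file name is a placeholder)
val landmarks = TLMSLandmarksIO.read2D(new File("data/detected-landmarks.tlms")).get

// each landmark carries an id and a 2D image point
landmarks.foreach(lm => println(s"${lm.id}: ${lm.point}"))
```

These landmarks are then passed to the fitscript alongside the target image and the model.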

Bernhard Egger

unread,
Jan 18, 2018, 5:15:44 PM1/18/18
to scalismo-faces
Hi again,

instead of detecting them, you can also manually label those landmark points with a simple labeling tool:
https://github.com/unibas-gravis/landmarks-clicker