I assume you are still in the fitting scenario. When you fit the image, one of the results is a full RenderParameter. This class directly provides a transformation from world coordinates to image coordinates:
import scalismo.faces.parameters.RenderParameter
import scalismo.geometry.{Point, _3D}

val rps: RenderParameter = ???   // the parameter set obtained from the fit
val point: Point[_3D] = ???      // a mesh point, in world coordinates

// renderTransform maps world coordinates to image coordinates
val pointInImage = rps.renderTransform(point)
In principle, you could apply this transformation to every vertex of the mesh and store the resulting image coordinates as texture coordinates. However, this is not a good idea: the projection does not assign a distinct location to every point on the face, since occluded and visible points can land on the same pixel. What you need instead is an embedding of the mesh into a 2D plane. Look for "texture embedding" or "mesh parameterization". Commonly used methods are Multi-Dimensional Scaling, Laplacian Eigenmaps, and Discrete Conformal Mappings. We recently started a branch (meshParameterization) which provides some of the math.
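To make the occlusion problem concrete, here is a small library-free sketch. It mimics what a render transform does with a simple pinhole projection to normalized [0, 1] texture coordinates; the camera parameters (focal, width, height) and both example points are made up for illustration and are not from scalismo-faces:

```scala
// Hypothetical minimal stand-ins for 3D/2D points (not the scalismo types).
case class Point3D(x: Double, y: Double, z: Double)
case class Point2D(u: Double, v: Double)

val focal  = 1.0    // assumed focal length
val width  = 512.0  // assumed image width in pixels
val height = 512.0  // assumed image height in pixels

// Pinhole projection, then normalization of pixel coordinates to [0, 1].
def projectToUV(p: Point3D): Point2D = {
  val xImg = focal * p.x / p.z * width / 2 + width / 2
  val yImg = focal * p.y / p.z * height / 2 + height / 2
  Point2D(xImg / width, yImg / height)
}

// Two distinct vertices on the same camera ray (think of a visible point
// and a point hidden behind it) project to the same texture location:
val front = Point3D(0.1, 0.1, 1.0)
val back  = Point3D(0.2, 0.2, 2.0)
// projectToUV(front) == projectToUV(back) -> no distinct UV per vertex
```

This is exactly why the projected coordinates cannot serve as a texture mapping, and why a proper parameterization of the mesh into the plane is needed.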
I hope that helps.