Question about UniformMeshSampler


Maia R.

Jul 9, 2021, 8:48:05 AM
to scalismo
Hi,
I have a question about the main difference between these two versions when I call icp iteratively. I tried to refactor some code and I didn't get the same results with the two versions.
In the first version, I instantiate the sampler outside the icp method, but the sampling itself (i.e. computing pointsOnTarget) happens inside the icp method in both versions.
Thank you very much,
Best regards,
Maia
================ 
First version:
val sampler = UniformMeshSampler3D(partialMesh, nbOfSamplePoints)(Random(10L))

def icp(modelInstance: TriangleMesh[_3D], partialMesh: TriangleMesh[_3D], noiseVariance: Double): TriangleMesh[_3D] = {
  val pointsOnTarget = sampler.sample().map(_._1)
  val idTargetPointPairs = for (pointOnTarget <- pointsOnTarget) yield {
    val ptId = modelInstance.pointSet.findClosestPoint(pointOnTarget).id
    (ptId, pointOnTarget)
  }
  pdm.posterior(idTargetPointPairs, noiseVariance).mean
}
===============
Second version:
def icp(modelInstance: TriangleMesh[_3D], partialMesh: TriangleMesh[_3D], noiseVariance: Double, nbOfPoints: Int): TriangleMesh[_3D] = {
  val sampler = UniformMeshSampler3D(partialMesh, nbOfPoints)(Random(10L))
  val pointsOnTarget = sampler.sample().map(_._1)
  val idTargetPointPairs = for (pointOnTarget <- pointsOnTarget) yield {
    val ptId = modelInstance.pointSet.findClosestPoint(pointOnTarget).id
    (ptId, pointOnTarget)
  }
  pdm.posterior(idTargetPointPairs, noiseVariance).mean
}


Marcel Luethi

Jul 10, 2021, 10:30:50 AM
to Maia R., scalismo
Hi Maia

In your second version you create a new sampler every time you call icp. As it is created with a fixed seed, it will always return the same points. In the first version you create the sampler once; whenever you call sample on it, a new set of points is returned.
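
To illustrate the difference, here is a minimal sketch (not taken from your code; the mesh file name and the number of points are placeholders, and I use the same scalismo.utils.Random as in your snippets):

import scalismo.geometry._3D
import scalismo.io.MeshIO
import scalismo.mesh.TriangleMesh
import scalismo.numerics.UniformMeshSampler3D
import scalismo.utils.Random

object SamplerSeedingDemo extends App {
  scalismo.initialize()

  // placeholder mesh; any TriangleMesh[_3D] will do
  val partialMesh: TriangleMesh[_3D] = MeshIO.readMesh(new java.io.File("partial.stl")).get

  // One sampler, sampled twice (as in your first version): the internal random
  // generator advances between calls, so the two draws generally differ.
  val sampler = UniformMeshSampler3D(partialMesh, 1000)(Random(10L))
  val firstDraw = sampler.sample().map(_._1)
  val secondDraw = sampler.sample().map(_._1)
  println(firstDraw.head)
  println(secondDraw.head) // usually a different point

  // A fresh sampler with the same seed for every draw (as in your second version):
  // each one starts from the same random state, so the draws are identical.
  val drawA = UniformMeshSampler3D(partialMesh, 1000)(Random(10L)).sample().map(_._1)
  val drawB = UniformMeshSampler3D(partialMesh, 1000)(Random(10L)).sample().map(_._1)
  println(drawA.head)
  println(drawB.head) // the same point as drawA.head
}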

Best regards,
Marcel


Maia R.

Jul 12, 2021, 9:13:12 AM
to scalismo
Hi Marcel,
I'm a bit confused: what would be the best use case for each version?
In Tutorial 11 (model fitting), the same points with the same point IDs are used, and those points are tracked across iterations.
Best regards,
Maia

Maia R.

Jul 12, 2021, 4:39:02 PM
to scalismo
BTW, the first version seems to give more accurate fitting than the second one for shape completion.
Best regards

Marcel Luethi

Jul 13, 2021, 3:35:32 AM
to Maia R., scalismo
Hi Maia

I would suggest that you use the first version in this case. The second version also does not use the sampler the way it was intended to be used. If you really wanted to use the same points in every iteration, you should use a FixedPointsUniformMeshSampler instead.

If you sample sufficiently many points, the difference between the two strategies should be small. If you sample only a few points, my intuition tells me that randomly resampling them in each iteration will often lead to better results. But it might very well depend on the particular application.
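
A minimal sketch of what that could look like, reusing pdm, partialMesh and nbOfSamplePoints from your snippets (I am assuming here that FixedPointsUniformMeshSampler takes the same arguments as UniformMeshSampler3D):

import scalismo.geometry._3D
import scalismo.mesh.TriangleMesh
import scalismo.numerics.FixedPointsUniformMeshSampler
import scalismo.utils.Random

// The points are chosen once when the sampler is constructed, so every
// call to sample() inside icp returns the identical set of target points.
val fixedSampler = FixedPointsUniformMeshSampler(partialMesh, nbOfSamplePoints)(Random(10L))

def icp(modelInstance: TriangleMesh[_3D], noiseVariance: Double): TriangleMesh[_3D] = {
  val pointsOnTarget = fixedSampler.sample().map(_._1) // same points in every iteration
  val idTargetPointPairs = for (pointOnTarget <- pointsOnTarget) yield {
    val ptId = modelInstance.pointSet.findClosestPoint(pointOnTarget).id
    (ptId, pointOnTarget)
  }
  pdm.posterior(idTargetPointPairs, noiseVariance).mean
}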

Best regards,

Marcel

Maia R.

Jul 13, 2021, 8:46:19 AM
to scalismo
Thank you very much Marcel.
Best regards
Maia

Maia R.

Jul 13, 2021, 5:08:29 PM
to scalismo
Hi Marcel,
I'd like to be sure I understood your explanation.
What about registration (establishing correspondences) with non-rigid ICP: in https://scalismo.org/docs/tutorials/tutorial11 the same points are used, by referencing the same point IDs in every iteration. Is this approach still correct?
Best regards
Maia

Marcel Luethi

Jul 14, 2021, 3:37:19 AM
to Maia R., scalismo
Hi Maia

Yes, both versions are correct. The difference is similar to the difference between gradient descent and stochastic gradient descent: one is deterministic and the other has a stochastic component built into the algorithm, but both find a minimum. Which version is better cannot be judged in general without looking at the specific problem.

Best regards,

Marcel

V R

Jul 14, 2021, 10:07:39 AM
to Marcel Luethi, scalismo
Thank you very much. That is a very good analogy.
Best regards,
Maia