Tutorial 15 with noise as another parameter to estimate


Maia R.

May 11, 2021, 9:27:31 AM
to scalismo
Hi,
I'm working through Tutorial 15 (https://scalismo.org/docs/tutorials/tutorial15), and it changed recently: the noise is now also treated as a parameter to be estimated by MCMC.
Could you explain the reasoning behind this change, please?

Thanks,
Best regards

Marcel Luethi

May 11, 2021, 11:19:28 AM
to Maia R., scalismo
Hi Maia

Sorry for having interrupted your work with that change. I did not think that somebody might be working on this tutorial at that very moment.
So far I have always treated the noise as a parameter the user has to set. However, it is always a bit difficult to really know that value, and it is usually easier to just specify a prior distribution instead. Since, in the sampling setting, this does not make the problem much more complicated, I thought it would be nice to show that we have this possibility.
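To make the idea concrete, here is a minimal, self-contained sketch in plain Scala (not Scalismo; all names and numbers are made up for illustration). Instead of fixing the noise level, the noise standard deviation is included in the sampled state, and random-walk Metropolis-Hastings estimates it together with the other parameters. Sampling the log of sigma keeps the noise level positive without a constrained proposal:

```scala
import scala.util.Random

object NoiseAsParameter {
  // Synthetic 1-D data: y_i = mu + eps, with true mu = 5 and true noise sigma = 2
  val rng = new Random(42)
  val data = IndexedSeq.fill(200)(5.0 + 2.0 * rng.nextGaussian())

  // Unnormalized log posterior over (mu, log sigma)
  def logPosterior(mu: Double, logSigma: Double): Double = {
    val sigma = math.exp(logSigma)
    // Broad Gaussian priors on mu and on log-sigma
    val logPrior = -0.5 * mu * mu / 100.0 - 0.5 * logSigma * logSigma / 4.0
    val logLik = data.iterator.map { y =>
      val d = (y - mu) / sigma
      -math.log(sigma) - 0.5 * d * d
    }.sum
    logPrior + logLik
  }

  // Random-walk Metropolis-Hastings over both mu and the noise level
  def run(iterations: Int = 20000, burnIn: Int = 5000): (Double, Double) = {
    var mu = 0.0
    var logSigma = 0.0
    var lp = logPosterior(mu, logSigma)
    val samples = for (_ <- 0 until iterations) yield {
      // Perturb either mu or log-sigma, mimicking a mixture proposal
      val (muP, lsP) =
        if (rng.nextBoolean()) (mu + 0.2 * rng.nextGaussian(), logSigma)
        else (mu, logSigma + 0.1 * rng.nextGaussian())
      val lpP = logPosterior(muP, lsP)
      // Metropolis acceptance step on the log scale
      if (math.log(rng.nextDouble()) < lpP - lp) { mu = muP; logSigma = lsP; lp = lpP }
      (mu, math.exp(logSigma))
    }
    val kept = samples.drop(burnIn)
    (kept.map(_._1).sum / kept.size, kept.map(_._2).sum / kept.size)
  }

  def main(args: Array[String]): Unit = {
    val (muHat, sigmaHat) = run()
    println(f"estimated mu = $muHat%.2f, estimated noise sigma = $sigmaHat%.2f")
  }
}
```

With enough data, the posterior mean of sigma ends up close to the true noise level, which is the same effect the updated tutorial exploits: the chain recovers a plausible noise level instead of relying on a hand-tuned constant.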

The larger context is that I am currently preparing more online material, which also explains a bit the ideas behind our fitting approach (see https://shapemodelling.cs.unibas.ch/probabilistic-fitting-course/). The tutorials are part of this course material, and I sometimes change them slightly to better harmonize with the theory.

Best

Marcel


V R

May 11, 2021, 12:00:17 PM
to Marcel Luethi, scalismo
Hi Marcel,
No problem, thanks for the update. Yes, it's true that noise estimation is not straightforward.
Anyway, everything works as smoothly as before with this change.
When do you plan to make this online course available?

I have a question, though, that may be Scala-related: could you have a look at the code below, please?
I tried to refactor the code of Tutorial 15 and put each case class in a separate Scala file (e.g. CachedEvaluator.scala, etc.). But I don't understand why I get nothing at all, compared to having all those case classes in a single script. By "nothing" I mean that none of the parameters are updated while the iterator is consumed. Is this related to bad class instantiation? To a memory issue (though I used CachedEvaluator)? Sorry to bother you with this, but I have been struggling with it for a few days and I don't understand what is happening.
Here is the code:
===================
val modelLmIds = modelLms.map(l => model.mean.pointSet.pointId(l.point).get)
val targetPoints = targetLms.map(l => l.point)

val landmarkNoiseVariance = 9.0
val uncertainty = MultivariateNormalDistribution(
  DenseVector.zeros[Double](3),
  DenseMatrix.eye[Double](3) * landmarkNoiseVariance
)

val correspondences = modelLmIds
  .zip(targetPoints)
  .map { modelIdWithTargetPoint =>
    val (modelId, targetPoint) = modelIdWithTargetPoint
    (modelId, targetPoint, uncertainty)
  }

/** Prior and likelihood */
val likelihoodEvaluator = CachedEvaluator(CorrespondenceEvaluator(model, correspondences))
val priorEvaluator = CachedEvaluator(PriorEvaluator(model))
val posteriorEvaluator = ProductEvaluator(priorEvaluator, likelihoodEvaluator)

/** Proposal generator */
val iShape = ShapeUpdateProposal(model.rank, 0.1)
val iRotation = RotationUpdateProposal(0.01)
val iTranslation = TranslationUpdateProposal(1.0)

val generator = MixtureProposal.fromProposalsWithTransition(
  (0.6, iShape),
  (0.2, iRotation),
  (0.2, iTranslation)
)

/** Building the Markov chain */
val initialParameters = Parameters(
  EuclideanVector(0, 0, 0),
  (0.0, 0.0, 0.0),
  DenseVector.zeros[Double](model.rank),
  0.0
)

val initialSample = Sample("initial", initialParameters, computeCenterOfMass(model.mean))
val chain = MetropolisHastings(generator, posteriorEvaluator)
val logger = new Logger()
val mhIterator = chain.iterator(initialSample, logger)

// Visualize every 500th sample
val samplingIterator = for ((sample, iteration) <- mhIterator.zipWithIndex) yield {
  println("iteration " + iteration)
  if (iteration % 500 == 0) {
    modelView.shapeModelTransformationView.shapeTransformationView.coefficients =
      sample.parameters.modelCoefficients
    modelView.shapeModelTransformationView.poseTransformationView.transformation =
      sample.poseTransformation
  }
  sample
}
val samples = samplingIterator.slice(1000, 10000).toIndexedSeq

val bestSample = samples.maxBy(posteriorEvaluator.logValue)
val bestFit = model
  .instance(bestSample.parameters.modelCoefficients)
  .transform(bestSample.poseTransformation)
======

Thank you very much,
Best regards
Maia

Marcel Luethi

May 12, 2021, 10:27:59 AM
to V R, scalismo
Hi Maia

A first version of the course is already (partly) available here:
I am currently in the process of finalizing week 4. My plan is to update and improve the material over the coming month.

Regarding your coding problem: it is good practice to split the classes into separate files, and I always do this. It should not lead to any problems. From the code that you sent, I cannot see anything that looks suspicious; the way you generate the samples looks okay. Do you execute the last three lines from within the main method?

Have you tried to do the refactoring gradually? Start with the working script, where everything is in one file, then move out individual pieces, such as the CachedEvaluator first, and see at which point it stops working. This should give you some indication of what the problem could be.
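One pitfall worth ruling out while refactoring (a guess, not a diagnosis of your specific code): Scala iterators are lazy, so a `for ... yield` over `chain.iterator(...)` does not run the chain by itself. The side effects attached to it, like the println and UI updates in the tutorial, only happen once a terminal operation such as `toIndexedSeq` consumes the iterator. A toy example in plain Scala, with no Scalismo dependency:

```scala
object LazyIteratorDemo {
  def main(args: Array[String]): Unit = {
    var produced = 0
    // Stand-in for chain.iterator(...): an endless, lazily evaluated sample stream
    val mhIterator = Iterator.iterate(0) { s => produced += 1; s + 1 }

    // Attaching a transformation (like the visualization step in the tutorial)
    // runs nothing yet: iterators are evaluated on demand
    val samplingIterator = mhIterator.zipWithIndex.map { case (s, _) => s }
    println(s"before forcing: produced = $produced")

    // Only a terminal operation such as toIndexedSeq actually drives the chain
    val samples = samplingIterator.slice(1000, 10000).toIndexedSeq
    println(s"after forcing: produced = $produced, kept = ${samples.size}")
  }
}
```

So if the `slice(...).toIndexedSeq` call ends up in a code path that is never executed after the refactoring (for example, outside the main method that actually runs), the chain is never consumed and no parameters are ever updated.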

Best regards,
Marcel

V R

May 13, 2021, 9:34:14 AM
to Marcel Luethi, scalismo
Dear Marcel,
Thank you for your explanation. 
I will follow your advice.
Best regards