Augmented model processing time (Tutorial 7 vs. pdm)


Maia R.

Mar 2, 2021, 5:19:23 PM3/2/21
to scalismo
Dear community,

I don't understand why the processing times of these two code samples differ so much.

// Very slow code (I followed https://scalismo.org/docs/tutorials/tutorial7)
def buildAugmentedPDM(pcaModel: PointDistributionModel[_3D, TriangleMesh],
                      genericKernel: MatrixValuedPDKernel[_3D]): PointDistributionModel[_3D, TriangleMesh] = {

    val gpSSM: LowRankGaussianProcess[_3D, EuclideanVector[_3D]] =
      pcaModel.gp.interpolate(TriangleMeshInterpolator3D())
    val covSSM: MatrixValuedPDKernel[_3D] = gpSSM.cov
    val augmentedCov: MatrixValuedPDKernel[_3D] = covSSM + genericKernel
    val augmentedGP = GaussianProcess(gpSSM.mean, augmentedCov)
    val lowRankAugmentedGP: LowRankGaussianProcess[_3D, EuclideanVector[_3D]] =
      LowRankGaussianProcess.approximateGPCholesky(
        referenceMesh,
        augmentedGP,
        relativeTolerance = 0.01,
        interpolator = TriangleMeshInterpolator3D[EuclideanVector[_3D]]()
      )
    PointDistributionModel(pcaModel.reference, lowRankAugmentedGP)
  }
 
  // Fast code (about 20x faster than the above)
  // gp is a GaussianProcess with a simple diagonal kernel
  val biasModel: LowRankGaussianProcess[_3D, EuclideanVector[_3D]] = LowRankGaussianProcess.approximateGPCholesky(
      referenceMesh,
      gp,
      relativeTolerance = 0.01,
      interpolator = TriangleMeshInterpolator3D[EuclideanVector[_3D]]()
    )
  val augmentedPDM = PointDistributionModel.augmentModel(pcaModel, biasModel)

Here, pcaModel is a model from data.
The main difference I see is that in the first case the low-rank approximation is computed for the augmented GP, while in the second case it is computed only for the diagonal-kernel GP.

Thank you very much,
Best regards,
Maia

Marcel Luethi

Mar 3, 2021, 1:58:39 AM3/3/21
to Maia R., scalismo
Hi Maia

The difference between the two versions is that in the first version the low-rank approximation is computed on the combined kernel. This is the general strategy, which works for combining any two kernels.
In the second version, the low-rank approximation is computed only for the bias model. Since the bias kernel is usually quite smooth, this does not take long. The two low-rank processes are then combined.
The result should be identical.
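To see why combining two already low-rank processes is cheap, here is a minimal plain-Scala sketch (not using Scalismo, and all names are hypothetical): if the first covariance factors as A * Aᵀ and the second as B * Bᵀ, then their sum factors as [A | B] * [A | B]ᵀ. The combined basis is just the concatenation of the two bases, so no new eigendecomposition of the summed kernel is needed.

```scala
// Gram matrix m * m^T, i.e. entry (i, j) is the dot product of rows i and j.
def gram(m: Array[Array[Double]]): Array[Array[Double]] = {
  val n = m.length
  Array.tabulate(n, n)((i, j) => m(i).zip(m(j)).map { case (a, b) => a * b }.sum)
}

def addMat(x: Array[Array[Double]], y: Array[Array[Double]]): Array[Array[Double]] =
  x.zip(y).map { case (r1, r2) => r1.zip(r2).map { case (a, b) => a + b } }

// Toy low-rank bases: 3 points, rank-2 process A and rank-1 process B.
val a = Array(Array(1.0, 0.0), Array(0.0, 2.0), Array(1.0, 1.0))
val b = Array(Array(0.5), Array(1.5), Array(-1.0))

// Concatenating the bases row-wise gives a rank-3 basis for the sum.
val combined = a.zip(b).map { case (ra, rb) => ra ++ rb }

val sumOfCovs  = addMat(gram(a), gram(b)) // Cov1 + Cov2
val covFromCat = gram(combined)           // [A | B] * [A | B]^T

// The two covariance matrices agree entry by entry.
val ok = sumOfCovs.flatten.zip(covFromCat.flatten).forall { case (x, y) => math.abs(x - y) < 1e-12 }
println(s"combined basis reproduces the summed covariance: $ok")
```

This is, presumably, what the one-line `augmentModel` path exploits: the PCA model and the bias model each already have a low-rank basis, so only the (smooth, hence quick) bias kernel ever needs a Cholesky/eigen approximation.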

Best regards,
Marcel

