# Augmented model processing time (Tutorial 7 vs. pdm)


### Maia R.

Mar 2, 2021, 5:19:23 PM
to scalismo
Dear community,

I don't understand why the processing times of these two code samples are so different from each other.

```scala
// Very slow version (I followed https://scalismo.org/docs/tutorials/tutorial7)
def buildAugmentedPDM(
    pcaModel: PointDistributionModel[_3D, TriangleMesh],
    genericKernel: MatrixValuedPDKernel[_3D]
): PointDistributionModel[_3D, TriangleMesh] = {
  val gpSSM: LowRankGaussianProcess[_3D, EuclideanVector[_3D]] =
    pcaModel.gp.interpolate(TriangleMeshInterpolator3D())
  val covSSM: MatrixValuedPDKernel[_3D] = gpSSM.cov
  val augmentedCov: MatrixValuedPDKernel[_3D] = covSSM + genericKernel
  val augmentedGP = GaussianProcess(gpSSM.mean, augmentedCov)
  val lowRankAugmentedGP: LowRankGaussianProcess[_3D, EuclideanVector[_3D]] =
    LowRankGaussianProcess.approximateGPCholesky(
      referenceMesh,
      augmentedGP,
      relativeTolerance = 0.01,
      interpolator = TriangleMeshInterpolator3D[EuclideanVector[_3D]]()
    )
  PointDistributionModel(pcaModel.reference, lowRankAugmentedGP)
}
```

```scala
// Fast version (about 20x faster than the above).
// Here, gp is a GaussianProcess with a simple diagonal kernel.
val biasModel: LowRankGaussianProcess[_3D, EuclideanVector[_3D]] =
  LowRankGaussianProcess.approximateGPCholesky(
    referenceMesh,
    gp,
    relativeTolerance = 0.01,
    interpolator = TriangleMeshInterpolator3D[EuclideanVector[_3D]]()
  )
val augmentedPDM = PointDistributionModel.augmentModel(pcaModel, biasModel)
```

Here, pcaModel is a model built from data.
The main difference I see is that in the first case the low-rank approximation is computed for the full augmented GP, while in the second case it is computed only for the diagonal-kernel GP.
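To illustrate where the time goes in the first version, here is a toy pivoted-Cholesky low-rank approximation of a kernel matrix, the kind of computation `approximateGPCholesky` performs over the mesh points. The function name and representation are hypothetical, not Scalismo's internals; `k(i, j)` stands for the kernel evaluated between points `i` and `j` of an n-point discretization.

```scala
// Toy sketch: pivoted Cholesky low-rank approximation of an n x n kernel
// matrix. Each iteration evaluates the kernel against all n points, so the
// cost grows with both n and the rank needed to reach the tolerance. A
// combined (PCA + generic) kernel typically needs many more columns than a
// smooth kernel alone, which is why version 1 is so much slower.
def pivotedCholesky(k: (Int, Int) => Double, n: Int, relTol: Double): Seq[Array[Double]] = {
  val diag = Array.tabulate(n)(i => k(i, i)) // residual variances per point
  val trace0 = diag.sum
  val cols = scala.collection.mutable.ArrayBuffer.empty[Array[Double]]
  while (diag.sum > relTol * trace0 && cols.length < n) {
    val p = diag.indices.maxBy(i => diag(i)) // pivot: largest residual variance
    val col = Array.tabulate(n) { i =>
      val prior = cols.map(c => c(i) * c(p)).sum
      (k(i, p) - prior) / math.sqrt(diag(p))
    }
    cols += col
    for (i <- 0 until n) diag(i) -= col(i) * col(i)
  }
  cols.toSeq
}
```

The number of columns returned plays the role of the model's rank: the rougher the combined kernel, the more columns are needed before the residual falls below `relativeTolerance`.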

Thank you very much,
Best regards,
Maia

### Marcel Luethi

Mar 3, 2021, 1:58:39 AM
to Maia R., scalismo
Hi Maia

The difference between the two versions is that in the first version the low-rank approximation is computed for the combined kernel. This is the general strategy, which works for combining any two kernels.
In the second version, the low-rank approximation is computed only for the bias model. Since the bias kernel is usually quite smooth, this does not take long. The two low-rank processes are then combined.
The result should be identical.
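Conceptually, combining two independent low-rank processes can be sketched with a toy model (the type and function names here are hypothetical, not Scalismo's actual classes): a low-rank GP is a mean plus a finite set of basis vectors, so combining two of them needs no new approximation over the mesh. The real `augmentModel` may additionally re-orthonormalize the combined basis, but that cost depends only on the number of basis functions, not on the number of mesh points.

```scala
// Hypothetical minimal model of a low-rank GP: a mean vector plus basis
// vectors, each with one value per (flattened) mesh point.
final case class ToyLowRankGP(mean: Array[Double], basis: Seq[Array[Double]])

// Combining two independent low-rank processes: add the means and
// concatenate the bases. No eigendecomposition over the mesh is needed,
// which is why the second version is so much faster.
def combine(a: ToyLowRankGP, b: ToyLowRankGP): ToyLowRankGP =
  ToyLowRankGP(
    a.mean.zip(b.mean).map { case (x, y) => x + y },
    a.basis ++ b.basis
  )
```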

Best regards,
Marcel
