Change point kernel: low-rank GP computation issue


Sybren van Rijn

Dec 11, 2024, 12:03:11 PM12/11/24
to scalismo
Goal: Define a fine kernel that is active only over the anterior rim of the fibula shaft. The scale at each point is based on its distance to the closest point in a set of points along the anterior rim.

Problem: The low-rank GP computation for this custom kernel is extremely slow; it does not complete even overnight.

Reference mesh: 8346 points
Precomputed scales: 1864 points with non-zero scale

Kernel Definition:
val fibulaKernel = new MatrixValuedPDKernel[_3D]() {
  // Base kernel for the rim region: isotropic Gaussian with sigma = 10, scaled by 30.
  val rimKernel = DiagonalKernel3D(GaussianKernel3D(10) * 30.0)

  def k(x: Point[_3D], y: Point[_3D]): DenseMatrix[Double] = {
    // Look up the precomputed scales; points away from the rim get scale 0.
    val scaleX = precomputedScales.getOrElse(x, 0.0)
    val scaleY = precomputedScales.getOrElse(y, 0.0)
    if (scaleX == 0.0 || scaleY == 0.0) {
      DenseMatrix.zeros[Double](3, 3)
    } else {
      rimKernel(x, y) * scaleX * scaleY
    }
  }

  override def domain = EuclideanSpace[_3D]
  override val outputDim = 3
}

precomputedScales is a map containing only the non-zero scales. 
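For context, this is roughly how the scale map is built (a sketch, not my exact code; the names buildScaleMap, rimPoints, and falloff, as well as the falloff function and cutoff value, are illustrative):

import scalismo.geometry.{_3D, Point}
import scalismo.mesh.TriangleMesh

// Sketch: decay the scale smoothly with distance to the rim and drop
// negligible values so the map stays sparse.
def buildScaleMap(mesh: TriangleMesh[_3D],
                  rimPoints: Seq[Point[_3D]],
                  falloff: Double): Map[Point[_3D], Double] = {
  mesh.pointSet.points.map { p =>
    val dist = rimPoints.map(r => (p - r).norm).min
    val scale = math.exp(-(dist * dist) / (falloff * falloff))
    p -> scale
  }.filter(_._2 > 1e-3).toMap // keep only the non-negligible scales
}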

Constructing the GP itself is quick. I have tried both the Nyström method with a few points and the pivoted Cholesky method with a high relative tolerance, using the NearestNeighborInterpolator. However, the low-rank approximation doesn't complete even overnight. Interestingly, if I define the same kernel over the entire mesh, the computation is really quick.
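For reference, the approximation call looks roughly like this (a sketch; referenceMesh stands in for my 8346-point reference mesh and the tolerance value is illustrative):

import scalismo.common.{EuclideanSpace, Field}
import scalismo.common.interpolation.NearestNeighborInterpolator3D
import scalismo.geometry.{_3D, EuclideanVector3D, Point}
import scalismo.statisticalmodel.{GaussianProcess, LowRankGaussianProcess}

// Zero-mean GP with the custom kernel as covariance.
val zeroMean = Field(EuclideanSpace[_3D], (_: Point[_3D]) => EuclideanVector3D(0, 0, 0))
val gp = GaussianProcess(zeroMean, fibulaKernel)

// Pivoted Cholesky variant of the low-rank approximation.
val lowRankGP = LowRankGaussianProcess.approximateGPCholesky(
  referenceMesh,
  gp,
  relativeTolerance = 0.1,
  interpolator = NearestNeighborInterpolator3D()
)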

Does anyone know how to mitigate this issue? My linear algebra skills aren't the strongest, and I would appreciate any advice or suggestions on how to improve the performance of the low-rank GP computation.

Thanks in advance and kind regards,
Sybren van Rijn

Marcel Luethi

Dec 12, 2024, 7:08:15 AM12/12/24
to Sybren van Rijn, scalismo
Dear Sybren

I can't tell you from looking at the code what makes it slow. However, there are several ways to make any such computation faster:
1. Reduce the number of points on which it is computed
2. Increase smoothness
3. Decrease the number of basis functions (i.e. increase the tolerance in the pivoted Cholesky).

I would start with a model that is so coarse that it computes in seconds, and then slowly refine it and make it more detailed. That way you might understand better where the computational cost comes from. A minimal sketch of this workflow is below.
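A sketch of such a coarse-to-fine setup (the decimation target and tolerance values are illustrative):

// Start with a heavily decimated reference and a generous tolerance,
// then tighten both once the coarse model computes in seconds.
// Note: the scale map has to be rebuilt for the coarse mesh, since its
// points will not match the keys of the original precomputedScales.
val coarseMesh = referenceMesh.operations.decimate(1000)
val coarseLowRankGP = LowRankGaussianProcess.approximateGPCholesky(
  coarseMesh,
  gp,
  relativeTolerance = 0.5, // high tolerance => few basis functions
  interpolator = NearestNeighborInterpolator3D()
)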

Best regards,
Marcel
