I have a few questions about my model. I am working on statistical shape modelling of human kidneys. I have built an SSM of the kidneys, sampled from the model space, and played with the model parameters, so I know there is a decent amount of variation between the shapes. But for some reason, when I use the following code:
import java.io.File
import breeze.linalg.DenseVector
import scalismo.io.MeshIO

val coeffs = DenseVector.zeros[Double](model.rank)
coeffs(0) = 3
val three_mesh = model.instance(coeffs)
ui.show(modelGroup, three_mesh, "+3")
coeffs(0) = 1
val one_mesh = model.instance(coeffs)
ui.show(modelGroup, one_mesh, "+1")
coeffs(0) = 2
val two_mesh = model.instance(coeffs)
ui.show(modelGroup, two_mesh, "+2")
MeshIO.writeMesh(two_mesh, new File("./data/kidney_pos_twomesh.stl"))
coeffs(0) = -3
val minus_three_mesh = model.instance(coeffs)
ui.show(modelGroup, minus_three_mesh, "-3")
coeffs(0) = -2
val minus_two_mesh = model.instance(coeffs)
ui.show(modelGroup, minus_two_mesh, "-2")
coeffs(0) = -1
val minus_one_mesh = model.instance(coeffs)
ui.show(modelGroup, minus_one_mesh, "-1")
the meshes generated at a given distance from the mean model (i.e. +1, +2, +3 standard deviations along the first component) come out almost identical to each other. I have attached some figures to show what I mean. In addition, I found code in past Scalismo conversations for computing model compactness and specificity, but that code prints the specificity from the last model rank down to the first, while I am trying to build an array from the first rank to the last:
def computeTotalVarianceFromKLBasis(ssm: StatisticalMeshModel): Double =
  ssm.gp.klBasis.map(basis => basis.eigenvalue).sum
val total_variance = computeTotalVarianceFromKLBasis(ssm)

// Pre-size the arrays: Array[Double]() is empty, so writing to
// specArray(rank - 1) would throw an ArrayIndexOutOfBoundsException.
val specArray = new Array[Double](ssm.rank)
val compArray = new Array[Double](ssm.rank)

// Iterate from the first rank to the last. (ssm.rank typically already equals
// ssm.gp.klBasis.length, so the original range
// "ssm.rank+0 to ssm.gp.klBasis.length-1" was empty.)
(1 to ssm.rank).foreach { rank =>
  val reduced_ssm = ssm.truncate(rank)
  val sample = reduced_ssm.sample
  val compactness = ssm.gp.klBasis.take(rank).map(_.eigenvalue).sum
  val dists = meshes.map { mesh => avgDistance(mesh, sample) }
  specArray(rank - 1) = dists.min
  compArray(rank - 1) = compactness / total_variance
  println(rank + " components, result in " + dists.min + " specificity")
  println(rank + " components, result in " + (compactness / total_variance) + " compactness")
}
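(As an aside, I gather specificity is usually estimated by averaging the best-match distance over many random samples rather than a single draw per rank, which makes the curve much less noisy. A hedged sketch of what I mean, reusing the `meshes` collection and `avgDistance` helper from above and assuming a Scalismo-style StatisticalMeshModel:

// Sketch only: estimate specificity of a (possibly truncated) model by
// drawing nSamples random instances and averaging, over samples, each
// sample's distance to its closest training mesh. Assumes `avgDistance`
// and `meshes` as defined above.
def specificity(ssm: StatisticalMeshModel,
                meshes: Seq[TriangleMesh[_3D]],
                nSamples: Int): Double = {
  val perSample = (1 to nSamples).map { _ =>
    val sample = ssm.sample
    meshes.map(mesh => avgDistance(mesh, sample)).min
  }
  perSample.sum / nSamples
}

Something like `specificity(ssm.truncate(rank), meshes, 100)` inside the loop would then replace the single-sample `dists.min`.)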
It's probably something stupid, but what am I doing wrong? Also: is there an easy way to do a training/testing split of the input data during model building in Scalismo? Is the purpose of computing leave-one-out cross-validation to create models to compare the original SSM against, or is it just another way of modelling the data? Finally, how do you compute sufficiency in Scalismo?
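(On the train/test split: as far as I can tell Scalismo doesn't choose the split for you, so I've been considering just shuffling and partitioning the registered meshes before building the model. A minimal sketch, assuming `allMeshes` is the full sequence of registered meshes; the 20% hold-out fraction and the seed are arbitrary choices of mine:

import scala.util.Random

// Sketch: hold out roughly 20% of the registered meshes for testing
// before building the SSM from the remainder.
val rng = new Random(42) // fixed seed so the split is reproducible
val shuffled = rng.shuffle(allMeshes)
val nTest = math.max(1, shuffled.size / 5)
val (testMeshes, trainMeshes) = shuffled.splitAt(nTest)
// Build the SSM from trainMeshes only; evaluate on testMeshes.

Does that seem like a reasonable approach, or is there a built-in mechanism I'm missing?)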