Hi,
I encountered something that puzzled me. In a situation where the Hessian of the log-likelihood is indeed sparse in a random effect, this sparsity is lost when I calculate the linear predictor via matrix-vector multiplication instead of element-wise operations. I attach a silly minimal example below.
Now obviously I know how to avoid the sparsity loss, but I would be interested in what's going on here and whether there is a way to keep sparsity even when using matrix-vector multiplication (because it allows more general code).
Minimal example:
library(RTMB)
nll1 <- function(par) {
  getAll(par)
  mu <- beta[1] + beta[2] * x # keeps sparsity
  -sum(dnorm(y, mu, 1, log = TRUE)) - sum(dnorm(x, log = TRUE))
}
nll2 <- function(par) {
  getAll(par)
  X <- cbind(1, x)
  mu <- X %*% beta # destroys sparsity
  -sum(dnorm(y, mu, 1, log = TRUE)) - sum(dnorm(x, log = TRUE))
}
beta <- c(2, 1)
x <- rnorm(10)
y <- beta[1] + beta[2] * x + rnorm(10)
par <- list(beta = beta, x = x)
obj1 <- MakeADFun(nll1, par, random = "x")
obj1$env$spHess(random = TRUE) # sparse
obj2 <- MakeADFun(nll2, par, random = "x")
obj2$env$spHess(random = TRUE) # dense
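For reference, here is the kind of workaround I had in mind: a sketch (my own, not from the RTMB documentation) that keeps the design-matrix style of coding but expands the product into element-wise operations, which as far as I can tell preserves the sparsity pattern. The function name nll3 is just for illustration.

```r
library(RTMB)
nll3 <- function(par) {
  getAll(par)
  X <- cbind(1, x)
  # Accumulate the linear predictor column by column with element-wise
  # products instead of a single %*% call, so each mu[i] only depends
  # on x[i] on the AD tape (assumption: this is what keeps the Hessian sparse).
  mu <- 0
  for (j in seq_along(beta)) mu <- mu + X[, j] * beta[j]
  -sum(dnorm(y, mu, 1, log = TRUE)) - sum(dnorm(x, log = TRUE))
}
obj3 <- MakeADFun(nll3, par, random = "x")
obj3$env$spHess(random = TRUE) # sparse again, in my runs
```

This is general in the number of columns of X, but of course it gives up the single matrix-vector product, so it does not answer the question of whether %*% itself can be made to keep sparsity.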
Cheers