Implausible posterior correlations with multivariate BYM2


Leonardo Cefalo

Jul 17, 2025, 5:05:25 AM
to r-inla-disc...@googlegroups.com
Dear INLA experts

I am fitting an M-model extension to the BYM model, roughly based on the paper by Urdangarin et al., 2024.
For practical reasons, I have to employ the sparse parametrisation of the BYM, i.e. model jointly (Y, U), extending the framework of Riebler et al. (2016) to the multivariate case (three-variate in this case).

However, I get strange results for the correlation matrix of the spatial effects: for instance, the posterior mean of the correlation between the 1st and 3rd BYM fields is about -0.92. This does not appear consistent with the data (trivially speaking, the observations are not negatively correlated).

More 'familiar' models, such as the M-model extension of the Leroux CAR, do give different results. The LCAR implementation follows closely, both in prior choice and in coding, the bigDM package, which implements several M-models in INLA and is documented in a paper by Vicente et al. (2023).

In both cases (LCAR and BYM) I use a Uniform prior on the mixing parameter (precision mixing and variance mixing, respectively) and the Bartlett decomposition on the covariance matrix, as outlined by Vicente et al. (2023).

I attach the R code and the data. The BYM implementation is at lines 227 and below.

Could I please ask for some advice regarding the BYM implementation/results? 

Thank you in advance 
LC

  --

P.S. (1)
For context: the data concern accesses to anti-violence centres in three different years; each year's records are treated as a single variable.
P.S. (2)
I do not think it is strictly necessary, but just in case I attach a short PDF note deriving the sparse parametrisation of the multivariate BYM.

 


BYM2 replication.R
Sparse M-model BYM.pdf
realdata_repl.RData

Leonardo Cefalo

Jul 29, 2025, 7:28:08 AM
to r-inla-disc...@googlegroups.com
Hi, sorry to step in again.

My best guess is that, for complex models like the BYM, the mode-finding algorithm stops somewhat too early and the posterior is too sensitive to the initial values.
The variance-covariance matrix has a Wishart prior. 
If I provide as initial values the posterior mode of a simpler model, e.g. the intrinsic CAR, the model (rather trivially) seems to run well.
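In generic INLA terms this amounts to something like the sketch below (placeholder names; in the attached script the initial values are actually passed through the rgeneric constructor, and theta.init must have one entry per hyperparameter of the full model):

theta.init <- c(rep(0, 4), mod.IMCAR$summary.hyperpar$mode)   ## 4 mixing parameters + covariance part from the simpler fit
mod.MMBYM  <- inla(Y ~ 1 + X + f(ID, model = MMBYM.rgeneric, extraconstr = constr.BYM),
                   family = "poisson", data = dd,
                   control.mode = list(theta = theta.init, restart = TRUE))

Here MMBYM.rgeneric stands for the rgeneric object defined in the attached functions file.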

A reproducible example is attached

Aside from that, I tried raising the degrees of freedom and using a scale matrix other than the identity for the Wishart prior, but as far as I can see this is not enough to solve the problem.

Could I please ask for a hint on why this happens, and possibly how to fix it in a more rigorous way? Thank you in advance.

Best, 
LC

BYM2 replication_v2.R

Håvard Rue

Jul 29, 2025, 8:12:14 AM
to Leonardo Cefalo, r-inla-disc...@googlegroups.com
the simulated example is pretty extreme... ???

> range( beta0 + beta1*X + Z.true)
[1] -4.184915371 8.850832871

> range(y.sim)
[1] 0 6948


this is what you want?

--
Håvard Rue
hr...@r-inla.org

Håvard Rue

Jul 29, 2025, 8:16:49 AM
to Leonardo Cefalo, r-inla-disc...@googlegroups.com


On Tue, 2025-07-29 at 13:27 +0200, Leonardo Cefalo wrote:
Hi, sorry to step in again.

My best guess is that, for complex models like the BYM, the mode-finding algorithm stops somewhat too early and the posterior is too sensitive to the initial values.

The easy way out is to add a 'rerun' step, like

mod.MMBYM <- inla(
    Y ~ 1 + X +
        f(ID, model = inla.MMBYM.model(k = 4, W = W, df = 6, PC = F,
                                       initial.values = c(0, 0, 0, 0, mod.IMCAR$summary.hyperpar$mode)),
          extraconstr = constr.BYM),
    family = "poisson", data = dd,
    control.compute = list(internal.opt = F, cpo = T, waic = T, config = T),
    verbose = T)
mod.MMBYM <- inla.rerun(mod.MMBYM)
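inla.rerun() simply reruns the optimisation using the previous fit as the starting point, so it can be applied more than once if needed.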
--
Håvard Rue
hr...@r-inla.org

Håvard Rue

Jul 29, 2025, 8:49:05 AM
to Leonardo Cefalo, r-inla-disc...@googlegroups.com

otherwise, I would change the initial values

initial <- function() {
    if (!exists("initial.values", envir = envir)) {
        return(rep(0, k * (k + 3) / 2))
    } else {
        return(initial.values)
    }
}


the latent model needs to be initialized too tight; something like

log(precision) = 4

is normally used, as the model needs to adapt to the data, and not the other way around.

can you set the initial values so that the model is tighter?
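(log precision = 4 corresponds to a standard deviation of exp(-2) ≈ 0.14, so the field starts out close to zero before adapting to the data.)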

Leonardo Cefalo

Jul 31, 2025, 12:08:53 PM
to Håvard Rue, r-inla-disc...@googlegroups.com
Hi Håvard, thank you for your valuable advice.

Indeed, setting tighter initial values for the covariance matrix (log standard deviations = -2, correlations = 0, plus logit mixing parameters = -3) avoided the abrupt stop; inla.rerun() did not change the model very much. The range of the true predictor has been restricted a little as well.

Comparing the joint posterior at the modal configurations of the two models (the one with tight initial values and the one with initial values plugged in from the ICAR), the former appears to be the one that actually manages to locate the mode, even though its marginal posteriors are completely messed up.

I don't know if this is the right way to retrieve the modal configuration; I simply tried:
   mode.idx <- which.max(unlist(lapply(model$misc$configs$config, function(x) x$log.posterior)))
   model$misc$configs$config[[mode.idx]]$theta          ## ==> the mode
   model$misc$configs$config[[mode.idx]]$log.posterior  ## ==> maximum of the posterior
 
It looks like the model is so complex (14 hyperparameters are a lot indeed) that the joint posterior mode is far from the true values of the hyperparameters. This does not happen, e.g., in the case of four independent BYMs (correlations forced to 0). Could I please ask for some advice on how to solve this further issue, if it can be solved?

In case it may be useful, another reproducible example is attached (also available on GitHub).

Thank you again
Best,
LC

BYM2 replication_v4.R

Havard Rue

Jul 31, 2025, 1:05:47 PM
to Leonardo Cefalo, r-inla-disc...@googlegroups.com
The function log.prior returns the log prior for theta, not the interpreted vector, so for the Wishart prior you'll need the Jacobian as well, right?

-- 
Håvard Rue 

Leonardo Cefalo

Aug 1, 2025, 1:05:13 PM
to Havard Rue, r-inla-disc...@googlegroups.com
Hi Havard, thank you so much for looking into the issue and going through my R code.

In addition to the Bartlett decomposition, I tried implementing the Wishart prior directly. In the former case there is no Jacobian to compute: theta consists of the log square roots of the diagonal elements of the Bartlett factor together with its off-diagonal entries (no change of variable for the latter), and the priors are assigned to these scalar elements, so no matrix-valued prior is defined in the code.
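To be explicit about the construction I am referring to, here is a minimal standalone sketch of the Bartlett route (k, df and the scale factor below are only for illustration, not the values used in the attached scripts):

## Bartlett construction: if A is lower triangular with A[i,i]^2 ~ chi^2(df - i + 1)
## and A[i,j] ~ N(0, 1) for i > j, then L %*% A %*% t(A) %*% t(L) ~ Wishart(df, L %*% t(L)).
k  <- 3
df <- 8
L  <- diag(k)                                     # Cholesky factor of the Wishart scale matrix
A  <- matrix(0, k, k)
diag(A) <- sqrt(rchisq(k, df = df - (1:k) + 1))   # chi-squared degrees of freedom decrease along the diagonal
A[lower.tri(A)] <- rnorm(k * (k - 1) / 2)         # standard normal entries below the diagonal
Sigma.draw <- L %*% A %*% t(A) %*% t(L)           # one draw from Wishart(df, L %*% t(L))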

For the Wishart prior implemented directly on the variance-covariance matrix (without resorting to the Bartlett factorization), the Jacobian determinant of the transformation from the set of standard deviations and correlations to the variance-covariance matrix should be, if I don't get the calculations wrong, simply 2^k times the product of the standard deviations raised to the power k.
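For what it's worth, that determinant can be checked numerically with numDeriv; sd2vcov and par0 below are illustrative helpers, not the functions in the attached scripts:

library(numDeriv)

k <- 3
sd2vcov <- function(par) {
  sdev <- par[1:k]
  R <- diag(k)
  R[lower.tri(R)] <- par[-(1:k)]         # correlations, column-major lower triangle
  R <- R + t(R) - diag(k)                # full correlation matrix
  S <- diag(sdev) %*% R %*% diag(sdev)   # variance-covariance matrix
  S[lower.tri(S, diag = TRUE)]           # half-vectorisation
}

par0 <- c(0.7, 1.1, 0.8, 0.9, 0.7, 0.5)       # (sd_1, sd_2, sd_3, rho_21, rho_31, rho_32)
abs(det(numDeriv::jacobian(sd2vcov, par0)))   # numerical Jacobian determinant
2^k * prod(par0[1:k])^k                       # analytic value: 2^k * (product of sd)^k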

The problem is that as soon as I try the direct Wishart prior, the model fails; here is a code example:

##' Simulate BYM model ---------------------------------------------------------
#' Neighbourhood matrix  ==> binary
load(url("https://github.com/lcef97/CAV_Puglia/raw/main/input/Replication/W.RData"))
#' Source functions to implement models; Link to R code: 'https://github.com/lcef97/CAV_Puglia/blob/main/Auxiliary/Functions.R'
devtools::source_url("https://raw.githubusercontent.com/lcef97/CAV_Puglia/refs/heads/main/Auxiliary/Functions.R")

n <- nrow(W)

#' Hyperparameters ------------------------------------------------------------#
#' Mixing
phi.true <- c(0.6, 0.4, 0.9,  0.7)
#' Standard deviations
sigma.true <- c(0.7, 1.1, 0.8, 0.6)
# Correlations
corr.true <- c(0.9, 0.7, 0.5, 0.6, 0.3, 0.7)
#' Variance-covariance matrix
Scale.true <- varcov(c(sigma.true, corr.true), tri=F)
#' M-matrix
M.true <- diag(sqrt(eigen(Scale.true)$values)) %*% t(eigen(Scale.true)$vectors)

#' Add covariate --------------------------------------------------------------#
set.seed(11)
X <- runif(n=4*n, min = -sqrt(3), max = sqrt(3))
beta0 <- 2
beta1 <- -0.6

#' Simulate BYM field ---------------------------------------------------------#
L.unscaled <- Matrix::Diagonal(n = n, x = rowSums(W)) - W
L <- INLA:::inla.scale.model.bym.internal(L.unscaled)$Q
constr <- INLA:::inla.bym.constr.internal(L)
constr.BYM <- list(A = cbind(Matrix::Matrix(0, nrow=4, ncol = 4*n),
                             kronecker(diag(4), constr$constr$A)),
                   e = c(0, 0, 0, 0))

set.seed(11)
#' IID component
V.true <- matrix(rnorm(n=4*n, mean=0, sd=1), nrow = n, ncol = 4, byrow=F)
#' ICAR component
U.lst <- lapply(c(1:4), function(i) {
  MASS::mvrnorm(n=1, mu=rep(0,n), Sigma=INLA:::inla.ginv(L)+Matrix::Diagonal(n=n, x=1e-7))
  })
U.true <- scale(do.call(cbind, U.lst))
#' BYM2 process - weighted sum of IID and ICAR, then scaled by M  
Z.true <- as.vector(U.true %*% diag(sqrt(phi.true)) %*% M.true +
  V.true %*% diag(sqrt(1 - phi.true)) %*% M.true)

#' Simulation of the observed variable ----------------------------------------#
y.sim <- numeric(length(Z.true))
for(i in seq(length(y.sim))) {
  eta <- beta0 + beta1*X[i] + Z.true[i]
  y.sim[i] <- rpois(n=1, lambda = exp(eta) )
}

dd <- data.frame(Y = y.sim, ID = c(1:(4*n)), X=X, group=rep(c(1:4), each=n))

##' Toy examples: simpler models -----------------------------------------------
#' Easiest: IMCAR -------------------------------------------------------------#

mod.IMCAR <- inla(
  Y ~ 1 + X +
    f(ID, model = inla.IMCAR(k = 4, W = W, df=8, Bartlett = F),
      extraconstr = list(A=constr.BYM$A[,-c(1:(4*n))], e = c(0,0,0,0)) ),
  family = "poisson", data = dd, num.threads = 1,

  control.compute = list(internal.opt = F, cpo = T, waic = T, config = T),
  verbose = T)

The last messages before crashing (with safe = T) are shown below; the theta values even look fine...
maxld= -3147.0056 fn=  2 theta= 0.2119 0.6599 0.3288 0.0138 3.1600 1.7565 1.0884 1.1842 0.5529 2.2920 [3.11, 0.250]
maxld= -3147.0054 fn=  3 theta= 0.2120 0.6603 0.3284 0.0139 3.1599 1.7565 1.0882 1.1840 0.5531 2.2919 [3.11, 0.184]


	GitId: Version_25.05.07
	Error:12 Reason: The Newton-Raphson optimizer did not converge
	Message: Condition `lambda < 1.0 / lambda_lim' is not TRUE
	Line:1028 Function: GMRFLib_init_GMRF_approximation_store__intern

P.S: Thank you also for suggesting the numDeriv R package

On Fri, 1 Aug 2025 at 07:43, Havard Rue <hr...@r-inla.org> wrote:
The numDeriv package has a jacobian function that works well. Then you just need to add the log(det()) of that one.

-- 
Håvard Rue 
On 31 Jul 2025 at 23:08 +0200, Havard Rue <hr...@r-inla.org>, wrote:
Yes. You need the prior w.r.t. theta, not the covariance or precision matrix itself.

-- 
Håvard Rue 
On 31 Jul 2025 at 19:59 +0200, Leonardo Cefalo <leonardo...@gmail.com>, wrote:
To ensure positive definiteness, rather than modelling the entries of the var-cov matrix directly, I use the Bartlett decomposition and set a χ² prior on the squares of the diagonal entries and a standard Normal prior on the lower entries of the triangular factor, as proposed e.g. by Vicente et al. (2023) (this prior choice has been shown to imply that the var-cov matrix follows a Wishart prior).

Am I missing something with this approach?


Håvard Rue

Aug 1, 2025, 1:36:46 PM
to Leonardo Cefalo, r-inla-disc...@googlegroups.com

I think you have found the source of the problem. Try to implement the Jacobian just using the mapping theta -> cov.mat or prec.mat, use numDeriv::jacobian to compute it, and then you can check your math.
--
Håvard Rue
hr...@r-inla.org

Leonardo Cefalo

Aug 11, 2025, 4:04:52 PM
to Havard Rue, r-inla-disc...@googlegroups.com
Done; I have temporarily dropped the group from the recipients because of the thread length.
Best,
LC

On Mon, 11 Aug 2025, 21:59 Havard Rue <hr...@r-inla.org> wrote:
Thanks for the good news. Please cc the list

-- 
Håvard Rue 
On 11 Aug 2025 at 20:49 +0200, Leonardo Cefalo <leonardo...@gmail.com>, wrote:
Hi Havard,

I think I solved the issue. Negative initial values close to -1 were necessary for the correlations as well. Then INLA manages to handle as many as 14 hyperparameters.
Thank you very much for all the massive support. The least I can do is acknowledge your help in any research work involving these models.

Maybe I can bring the discussion group back into the loop, if the whole thread can be useful to other researchers.

Gratefully, 
LC

On Thu, 7 Aug 2025 at 21:51, Havard Rue <hr...@r-inla.org> wrote:
If you add replications you can check that the true values are recovered. I still don't understand why the default one does not converge to the good solution.

-- 
Håvard Rue 
On 7 Aug 2025 at 21:39 +0200, Håvard Rue <hr...@r-inla.org>, wrote:
I must admit we had bad experience with the Wishart before, and this is why we're moving to new PC-prior based priors for smaller correlation/covariance/precision matrices. We can also do graph-based ones now...

On Thu, 2025-08-07 at 20:10 +0200, Leonardo Cefalo wrote:
Checking the code, I had only forgotten to remove tolerance = 1e-7 from the default model (the one performing badly).

On Thu, 7 Aug 2025, 20:08 Håvard Rue <hr...@r-inla.org> wrote:
> and the two models are identical except for initial values ?
>
>
> On Thu, 2025-08-07 at 19:16 +0200, Leonardo Cefalo wrote:
> >
> > It still gets stuck in the enable early_stop "loop"; paradoxically, this time the correlation is even overestimated:
> > > vcov_summary(mod.MMBYM)
> > $cor
> >            mean         sd quant0.025  quant0.5 quant0.975
> > rho12 0.2922491 0.08834163  0.0759594 0.3086077  0.4181434
> >
> > $var
> >        mean        sd quant0.025 quant0.5 quant0.975
> > X1 2.791213 0.4169287   2.064897 2.759818   3.695607
> > X2 6.024690 1.9879610   2.940733 5.760166  10.647586
> >
> > The guided version is perfect instead:
> > > vcov_summary(mod.MMBYM.guided)
> > $cor
> >             mean        sd quant0.025   quant0.5 quant0.975
> > rho12 0.02346468 0.1264466 -0.2251177 0.02451337  0.2667643
> >
> > $var
> >         mean        sd quant0.025  quant0.5 quant0.975
> > X1 2.5268294 0.3561742  1.9291348 2.4903265   3.320504
> > X2 0.9779752 0.1947421  0.6553587 0.9573342   1.415096
> >
> > Also, the guided model has a slightly higher posterior at the mode (-1736 versus -1741, as I recall).
> > Here the code:
> > https://github.com/lcef97/CAV_Puglia/blob/main/Other/inla_ticket/BYM2%20replication_v8_2d_NOcor.R
> >
> > On Thu, 7 Aug 2025 at 18:40, Havard Rue <hr...@r-inla.org> wrote:
> > >
> > > It's strange if it persists with k=2. Try with k = 2 and zero correlation?
> > >
> > > --
> > > Håvard Rue
> > > hr...@r-inla.org
> > > On 7 Aug 2025 at 17:31 +0200, Leonardo Cefalo <leonardo...@gmail.com> wrote:
> > > > Thank you very much Håvard
> > > >
> > > > The 'guided' model having a higher joint posterior changes many things.
> > > > So I tried with 2 and 3 variables, and the same problem persists. The good thing is that it means the problem is not caused by the high number of parameters.
> > > >
> > > > It involves basically all the M-models defined in the Functions file, the ones implemented using bigDM (an R package specific to M-models, on which those functions are based), plus the scalar-mixing version of the BYM, which still relies on the M matrix defined such that M'M = \Sigma and giving M-models their name; for convenience I use M = eigenvalues^{1/2} * t(eigenvectors).
> > > >
> > > > So maybe M-models are not appropriate for this kind of application, or some caveats are needed when applying them.
> > > >
> > > > On Thu, 7 Aug 2025 at 13:27, Håvard Rue <hr...@r-inla.org> wrote:
> > > > >
> > > > > I tried reading the code again, and reran it (removing set.seed, etc.), trying different initial values. I'm unable to get mod.MMBYM and mod.MMBYM.guided equal, and typically get something like
> > > > >
> > > > > > mod.MMBYM$mode$log.posterior
> > > > > [1] -3340.13567
> > > > >
> > > > > > mod.MMBYM.guided$mode$log.posterior
> > > > > [1] -3121.828637
> > > > >
> > > > > so the guided one is far better. Oops: you need to extract the log.posterior like this, and not as you did in the code. The reason is that `config' stores things a little differently:
> > > > >
> > > > > > r = inla(y ~ 1, data = data.frame(y = rnorm(10)), control.compute = list(config = TRUE))
> > > > > > r$mode$log.posterior.mode
> > > > > [1] -21.95455314
> > > > >
> > > > > and
> > > > >
> > > > > > r$misc$configs$max.log.posterior
> > > > > [1] -21.95455316
> > > > >
> > > > > whereas the log.posterior for each config is relative to the mode (r$misc$configs$max.log.posterior)
> > > > >
> > > > > > r$misc$configs$config[[1]]$log.posterior
> > > > > [1] -4.215996335
> > > > >
> > > > > It's hard to grasp all the details of your implementation. If you want to check for consistency, I would try to solve the 2-dimensional case first and remove the covariate, then try the 3-dimensional case, etc.
> > > > >
> > > > > Let me know what you think.
> > > > >
> > > > > Best
> > > > > Havard
> > > > > On Thu, 2025-08-07 at 08:29 +0200, Leonardo Cefalo wrote:
> > > > > > Hi Håvard, sorry if the thread gets too long
> > > > > >
> > > > > > I tried checking for irregularities in the Q matrix (negative entries in the precision diagonal, misspecified graph Laplacian, negative eigenvalues, etc.) but the assertion keeps failing.
> > > > > > So I would revert to the Bartlett decomposition, which assigns the chi^2 and N(0,1) priors to the single entries of the triangular factor of var-cov¹ (diagonal and off-diagonal respectively), and take a look at the original problem.
> > > > > >
> > > > > > In practice, simple models like the IMCAR work well. For the M-model extension of the BYM (which is even the data-generating process), instead, the posterior modes of the hyperparameters are far away from the true values. Here I compare the model with tight (default, logfile here) initial values and the one with the initial values of the var-cov matrix given by its posterior mode under the IMCAR (guided, logfile here):
> > > > > >
> > > > > >                true default  guided
> > > > > > logit.phi1   0.4055 -2.1503  0.5872
> > > > > > logit.phi2  -0.4055  1.3941  0.8343
> > > > > > logit.phi3   2.1972 -0.0176  1.7773
> > > > > > logit.phi4   0.8473 -0.4388  0.3355
> > > > > > diag.N1     -0.3567  0.7254 -0.2459
> > > > > > diag.N2     -0.7351  0.2247 -0.6779
> > > > > > diag.N3     -0.5645  0.4917 -0.2535
> > > > > > diag.N4     -0.9436 -0.3943 -0.7143
> > > > > > no.diag.N21  0.6930 -0.8915  1.0926
> > > > > > no.diag.N31  0.3920  0.1517  0.6447
> > > > > > no.diag.N41  0.2100 -1.8708  0.3429
> > > > > > no.diag.N32  0.5280  0.7967 -0.1954
> > > > > > no.diag.N42  0.1980  0.4767 -0.2799
> > > > > > no.diag.N43  0.3360  0.6485  0.3601
> > > > > >
> > > > > > Yet it seems the ones in the centre are the actual posterior modes, as the joint posterior (is the joint posterior retrieved from here?) is higher for the model with default initials (first below) than for the IMCAR-based one (second below):
> > > > > > > mod.MMBYM$misc$configs$config[[mode.idx]]$log.posterior
> > > > > > [1] -5.649018
> > > > > > > mod.MMBYM.guided$misc$configs$config[[mode.idx.guided]]$log.posterior
> > > > > > [1] -10.03069
> > > > > >
> > > > > > The thing I do not understand is: how come the posterior modes are so far away from the true values? Just in case, the R code is attached (also on github if preferred).
> > > > > >
> > > > > > Thank you in advance
> > > > > > Kindest regards,
> > > > > > LC
> > > > > >
> > > > > > --
> > > > > >
> > > > > > ¹I notice my previous statement "there was no Jacobian to be computed since theta included the log-sqrt of diagonal elements in the Bartlett factor" was indeed ambiguous: it simply meant that, in the model, the prior is written on the scalar entries of the Bartlett triangular factor, and the only change of variable needed is from the diagonal entries to their logarithms; but of course dwish = \pi(vcov) = \pi(factor) * |J(factor --> vcov)|, and \pi(vcov) is not directly modelled.
> > > > > >
> > > > > > On Mon, 4 Aug 2025 at 15:20, Håvard Rue <hr...@r-inla.org> wrote:
> > > > > > > if you set
> > > > > > >
> > > > > > > control.inla = list(cmin = 0.0001)
> > > > > > >
> > > > > > > and still get the error, there is an error in the other part of your Q-matrix construction...
> > > > > > >
> > > > > > > On Mon, 2025-08-04 at 15:13 +0200, Leonardo Cefalo wrote:
> > > > > > > > I'm on INLA version 2025.06.22-1 and R 4.5.0 (Windows)
> > > > > > > >
> > > > > > > > On Mon, 4 Aug 2025 at 15:07, Håvard Rue <hr...@r-inla.org> wrote:
> > > > > > > > >
> > > > > > > > > which version of R and INLA are you on?
> > > > > > > > >
> > > > > > > > > On Mon, 2025-08-04 at 15:03 +0200, Leonardo Cefalo wrote:
> > > > > > > > > > That is the puzzling thing: I ran with cmin=0, and I also tried updating INLA, and it keeps failing
> > > > > > > > > >
> > > > > > > > > > On Mon, 4 Aug 2025 at 13:57, Havard Rue <hr...@r-inla.org> wrote:
> > > > > > > > > > >
> > > > > > > > > > > Then cmin should fix it
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > Håvard Rue
> > > > > > > > > > > hr...@r-inla.org
> > > > > > > > > > > On 4 Aug 2025 at 13:27 +0200, Leonardo Cefalo <leonardo...@gmail.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > I have just added in inla.rgeneric.IMCAR a check that all eigen(Sigma)$values > 0 and that Sigma is exactly symmetric; it never gets triggered. Indeed, at the last iteration before crashing I have
> > > > > > > > > > > >
> > > > > > > > > > > > maxld= -3165.3201 fn=289 theta= 0.1983 0.5899 0.3253 0.1225 2.5544 1.2006 0.7582 0.9577 0.2437 1.7828 [2.79, 0.029]
> > > > > > > > > > > > Assertion failed: arg->Q->a[0] >= 0.0, file tabulate-Qfunc.c, line 117
> > > > > > > > > > > > ...
> > > > > > > > > > > >
> > > > > > > > > > > > > theta <- c(0.1983, 0.5899, 0.3253, 0.1225, 2.5544, 1.2006, 0.7582, 0.9577, 0.2437, 1.7828)
> > > > > > > > > > > > > vSigma <- theta2vcov(theta)
> > > > > > > > > > > > > Sigma <- array(0, dim=c(k,k))
> > > > > > > > > > > > > Sigma[lower.tri(Sigma)] <- vSigma[-c(1:k)]
> > > > > > > > > > > > > Sigma <- Sigma + t(Sigma) + diag(vSigma[c(1:k)])
> > > > > > > > > > > > >
> > > > > > > > > > > > > Sigma
> > > > > > > > > > > >           [,1]      [,2]      [,3]      [,4]
> > > > > > > > > > > > [1,] 1.4867611 1.8821372 0.9069504 0.4988167
> > > > > > > > > > > > [2,] 1.8821372 3.2537234 1.1120917 0.2472151
> > > > > > > > > > > > [3,] 0.9069504 1.1120917 1.9166905 1.1143167
> > > > > > > > > > > > [4,] 0.4988167 0.2472151 1.1143167 1.2776213
> > > > > > > > > > > > > eigen(Sigma)$values
> > > > > > > > > > > > [1] 5.2995134 2.0281162 0.4209165 0.1862501
> > > > > > > > > > > >
> > > > > > > > > > > > Which looks OK
> > > > > > > > > > > >
> > > > > > > > > > > > On Mon, 4 Aug 2025 at 12:56, Håvard Rue <hr...@r-inla.org> wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > could you also check that the cov.mat you create internally in 'IMCAR' is sym.pos.def? quite surprisingly, the smallest eigenvalue is easily negative... it's a numerical error
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Mon, 2025-08-04 at 12:09 +0200, Leonardo Cefalo wrote:
> > > > > > > > > > > > > > Dear Håvard, thank you so much for your patience
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I tried to compute the Jacobian of the theta --> varcov matrix map automatically, with no intermediate passages nor manual calculations; the log.prior argument is now
> > > > > > > > > > > > > >       val <- log(MCMCpack::dwish(param$Sigma, S = scale.fac %*% t(scale.fac), v = df)) ## prior
> > > > > > > > > > > > > >       val <- val + log(abs(det(numDeriv::jacobian(theta2vcov, theta)))) ## change of variable
> > > > > > > > > > > > > > where theta is the vector of the log-standard deviations and logit-correlations, scale.fac for now is diag(k), and theta2vcov computes the half-vec of the variance-covariance matrix.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Yet, with the following command (function renamed inla.IMCAR.Bartlett --> inla.IMCAR, as the Bartlett decomposition is set aside for the moment)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > inla( Y ~ 1 + X +
> > > > > > > > > > > > > >     f(ID, model = inla.IMCAR(k = 4, W = W, df=8, Bartlett = FALSE),
> > > > > > > > > > > > > >       extraconstr = list(A=constr.BYM$A[,-c(1:(4*n))], e = c(0,0,0,0)) ),
> > > > > > > > > > > > > >   family = "poisson", data = dd, num.threads = 1,
> > > > > > > > > > > > > >   control.compute = list(internal.opt = F, cpo = T, waic = T, config = T),
> > > > > > > > > > > > > >   verbose = T)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > the system fails, and I don't know how to interpret the failure. This also happens with different degrees of freedom on the Wishart prior:
> > > > > > > > > > > > > > Iter=6 |grad|=68.2 |dx|=0.211 |best.dx|=0.211 |df|=16.6 |best.df|=16.6
> > > > > > > > > > > > > > maxld= -3165.4735 fn=275 theta= 0.1990 0.5865 0.3244 0.1259 2.5548 1.2009 0.7584 0.9584 0.2435 1.7831 [2.79, 0.043]
> > > > > > > > > > > > > > maxld= -3165.4310 fn=276 theta= 0.1995 0.5874 0.3236 0.1256 2.5584 1.2018 0.7596 0.9583 0.2434 1.7859 [2.79, 0.043]
> > > > > > > > > > > > > > maxld= -3165.3831 fn=280 theta= 0.1995 0.5878 0.3285 0.1276 2.5549 1.2010 0.7591 0.9603 0.2432 1.7835 [2.79, 0.043]
> > > > > > > > > > > > > > maxld= -3165.3540 fn=287 theta= 0.1976 0.5833 0.3264 0.1233 2.5552 1.2011 0.7585 0.9588 0.2428 1.7838 [2.79, 0.043]
> > > > > > > > > > > > > > maxld= -3165.3329 fn=288 theta= 0.1943 0.5872 0.3240 0.1274 2.5551 1.2008 0.7584 0.9586 0.2436 1.7834 [2.79, 0.043]
> > > > > > > > > > > > > > maxld= -3165.3201 fn=289 theta= 0.1983 0.5899 0.3253 0.1225 2.5544 1.2006 0.7582 0.9577 0.2437 1.7828 [2.79, 0.043]
> > > > > > > > > > > > > > Assertion failed: arg->Q->a[0] >= 0.0, file tabulate-Qfunc.c, line 117
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Could I please ask what is meant by arg->Q->a[0]? If it has to do with the precision of the latent field, I checked that all the diagonal elements of solve(Sigma) are > 0 at the last configuration of theta printed above.
> > > > > > > > > > > > > > LC
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On Fri, 1 Aug 2025 at 20:39, Håvard Rue <hr...@r-inla.org> wrote:
> > > > > > > > > > > > > > > you need to do
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > m <- M <- 3
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > my bad
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > On Fri, 2025-08-01 at 20:30 +0200, Håvard Rue wrote:
> > > > > > > > > > > > > > > > this code makes it more explicit...
> > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > On Fri, 2025-08-01 at 20:13 +0200, Håvard Rue wrote:
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > no no, you need
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > jacobian(new.varcov, x = theta)
> > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > On Fri, 2025-08-01 at 19:56 +0200, Leonardo Cefalo wrote:
> > > > > > > > > > > > > > > > > > I ran into a crash using jacobian(varcov, x = c(stdev, correlations)), but let me have a double-check
> > > > > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > > > > On Fri, 1 Aug 2025, 19:36 Håvard Rue <hr...@r-inla.org> wrote:
> > > > > > > > > > > > > > > > > > >