Re: help with somewhat inconsistent results between Gamma GLMM in Nimble vs lme4::glmer

Chris Paciorek

Aug 9, 2023, 1:50:57 PM
to Tiago Marques, nimble-users
Hi Tiago,

For a gamma distribution, the variance is the shape times the squared scale, which is equivalent to the mean (shape times scale) times the scale. So the variance scales linearly with the mean.

But in your code, the variance scales with the squared mean (as one can see if you compute shape times squared scale). I think you want

crate[i] ~ dgamma(shape=mean[i]*disp, scale=1/disp)

where `disp` is the inverse scale.
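A quick numeric sketch of the algebra (the values of `mu` and `disp` below are hypothetical, chosen only to illustrate), using the standard facts that for X ~ Gamma(shape = a, scale = s), E[X] = a*s and Var[X] = a*s^2:

```r
## Hypothetical values, just to check the implied mean and variance
mu   <- 3.0   # target mean
disp <- 2.0   # inverse scale

## Suggested parameterization: shape = mu*disp, scale = 1/disp
a <- mu * disp
s <- 1 / disp
mean_x <- a * s    # recovers mu
var_x  <- a * s^2  # equals mu/disp, so variance scales linearly with the mean
```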

To check this you may want to look at some refs on gamma regression to see if it matches what I have above in terms of how the mean relates to the parameters of the gamma distribution.

Given that, it's not clear to me that a uniform prior on `disp` is what you want. That said, for this problem I'm not sure offhand where it's best to place the uniformity so as to be analogous to putting uniform priors on standard deviations as a decent non-informative choice.

-chris

On Wed, Aug 9, 2023 at 10:40 AM Tiago Marques <tiagoand...@gmail.com> wrote:
Dear NIMBLE users group,

I have implemented a Bayesian Gamma GLMM equivalent to this lme4 glmer model:

glmer(rate~(1|location)+(1|year),data=data,family=Gamma(link="log"))

so essentially, an intercept-only model with two random effects, one associated with location and one with year (both factors). Observations were made at a number of random times and places.

I have created the following nimble model

GLMMcode <- nimbleCode({
  # the overall intercept
  beta0 ~ dnorm(0,sd=10)
  # random effect standard deviation associated with location; a uniform, might change this to something else later
  sigmal_RE ~ dunif(0, 2)
  # random effect standard deviation associated with year; a uniform, might change this to something else later
  sigmay_RE ~ dunif(0, 2)
  # the gamma dispersion (or variance - see commented parametrization 1) parameter; a uniform, might change this to something else later
  #dispersion ~ dunif(0, 10)
  disp ~ dunif(0, 10)
  #get year random effects
  for(yy in 1:nyears){
    #REy[yy] ~ dnorm(0, sd = sigmay_RE)
    REy[yy] ~ dnorm(0, sd = 1)
  }  
  #get location random effects
  for(ll in 1:nlocs){
    #REl[ll] ~ dnorm(0, sd = sigmal_RE)
    REl[ll] ~ dnorm(0, sd = 1)
  }  
  for (i in 1:N){
    #get the linear predictor, consider a log link function
    #log(mean[i]) <- beta0 + REy[year[i]] + REl[loc[i]]
    # now using a non-centered parametrization, a suggestion by Ben Augustine
    log(mean[i]) <- beta0 + REy[year[i]]*sigmay_RE + REl[loc[i]]*sigmal_RE
    #parametrization 1
    # crate[i] ~ dgamma(shape=(mean[i]^2)/disp,scale=disp/mean[i])
    #parametrization 2
    crate[i] ~ dgamma(shape=1/disp,scale=mean[i]*disp)
  }
})

This model runs just fine (fully running code is hosted here https://github.com/TiagoAMarques/Report4BB/tree/main/LookingAtNimble if someone feels especially keen; it takes less than a minute to run), and there seem to be no issues with mixing or convergence (right? plot here https://github.com/TiagoAMarques/Report4BB/blob/main/LookingAtNimble/TracePlotsGammaGLMM.jpeg).

However, I find that the estimated standard deviations of the random effects, in particular for location, and even the estimate of the overall mean, are different enough from the glmer results to make me think something might be off. I know that Bayesian and frequentist approaches need not produce similar estimates, but I am using fairly uninformative priors, so I was expecting things to be more alike.

So the questions are:

1. do people with more experience at Bayesian implementations agree that this is off enough from the glmer results to be strange?

2. If so, any suggestions as to what might cause this? In particular, I wonder whether (a) there might be something off in my Gamma regression parameterization, or (b) there might be something about the priors I'm using that I'm missing.
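One quick check on (a) would be to work out the mean-variance relationship each commented parametrization implies. A minimal sketch with hypothetical values, using Var = shape * scale^2 for a Gamma with given shape and scale:

```r
## Implied variance for the two parametrizations in the model code above
disp <- 0.5

## parametrization 1: shape = (mu^2)/disp, scale = disp/mu
var_p1 <- function(mu) ((mu^2) / disp) * (disp / mu)^2

## parametrization 2: shape = 1/disp, scale = mu*disp
var_p2 <- function(mu) (1 / disp) * (mu * disp)^2

var_p1(4) / var_p1(2)  # = 1: variance is constant (equal to disp), independent of the mean
var_p2(4) / var_p2(2)  # = 4: variance quadruples when the mean doubles, i.e. scales with mean^2
```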

PS - I am really a Bayes newbie... so all nonsense is mine, but - because one should take any opportunity to praise colleagues for sharing their time - I'd like to thank Ben Augustine and Perry de Valpine, who have already spent their precious time trying to help me, with very useful comments on earlier implementations of the BUGS code. Also, many thanks to Ben Bolker for setting me on this Bayesian track to solve an issue with predictions for new unobserved levels of random effects in a GLMM, via an answer on Cross Validated (https://stats.stackexchange.com/questions/616697/how-to-estimate-precision-on-a-prediction-for-a-glmm-for-an-unobserved-level-of/).
