stan_glmer model with Gamma family not converging.


AM Madrigal

Feb 15, 2016, 10:17:48 PM2/15/16
to Stan users mailing list
Hello all,

I’m trying to run a model using rstanarm, but I am having some problems and am unsure where I can find more advice. I’ve googled questions and found some directions, but haven’t found answers. I’ve looked into the vignettes, but the troubleshooting didn’t cover my questions. Is there any reference I can consult?

I would usually set up models in BUGS, rjags, etc. But given that I now have some time constraints, and that I found this new package that seems to cover it all, I thought I should give it a try.

I have a positive response (revenue), so I was trying to run a model with a Gamma family. The model is failing to converge and taking a long time, and I am unsure how to move forward.

My model is roughly of the form:

totalRevenue (y) ~    X1  + X2  + X3 + (X2|country)+(X2|group)

Questions /comments.
  • X1, X2, and X3 are continuous. I’ve tried re-scaling them (centred and divided by their standard deviation) before running the model, as I was getting messages that they were on different scales.
  • Country and group are factors.
  • I tried changing the algorithm to “meanfield” and “fullrank” to test models, but the output seems quite random: sometimes it comes out with errors, sometimes it just goes quiet with no feedback and doesn’t create a model, and sometimes it gives some answers, mainly with warnings about not converging. When it doesn’t complain, I am unsure how to check convergence, as there is no Rhat statistic… So I have kept to MCMC, which takes long, so I don’t get immediate feedback, but at least when it doesn’t collapse I seem to be able to assess convergence or use shinystan (which is really good!! Congrats!)…
  • I tried centering the response (so I have positives and negatives) and using a gaussian, and I do get some results converging after some time; however, the ppcheck shows that the normal doesn’t really fit the response, which is quite skewed.
  • I tried a family=Gamma, which seems conceptually a better option for my response. It doesn’t converge.
    • I tried adjusting the link to see if that helped, but it drops the “link=” parameter from the function… I followed the usual glm specification of the family-link, but it doesn’t work. Do you know how I can specify different links (e.g. identity / inverse / log)?
    • I have tried adapt_delta at higher values like 0.98 / 0.99, and still no convergence
    • There was a comment in one of the threads about increasing the value for tree depth, but I haven’t found exactly how to pass that to stan_glmer(). I tried control = list(max_treedepth = 20), but it seems to ignore / drop it. Any ideas on how to make that go through?
    • I’ve tried adding some priors to the coefficients, but maybe I need some more appropriate / vaguer ones, as when I add them the process goes into limbo. So for the moment I am leaving that out.
      • On the priors: when writing BUGS-type code, one usually specifies one distribution for each of the parameters... here it all seems to be collapsed into the "prior" option. I probably don't need to add different priors for each coefficient, and I'm guessing the priors for variances are OK to leave as "default".
      • I believe that adding some priors, even if not too informative, might help convergence; however, since I'm not sure of the scale I would get for my coefficients, I don't want to mislead the sampler by setting a "wrong" prior focused on the wrong range.
      • Basic question: when specifying priors in rstanarm, does the scale follow the usual R convention where it is the standard deviation, or does it refer to "precisions" (1/sigma^2)?
Any ideas / help / light  / links will be appreciated.

The code (with some commented lines) is something like:
datBNscGamma <- datBN %>%
  mutate_each_(funs(scale), vars = c("totaladspend", "Week0dn", "onlinePC1"))

test_model2 <- stan_glmer(
  totalRevenue ~ Week0dn + totaladspend + onlinePC1 +
    (totaladspend | country) + (totaladspend | filmCluster),
  data = datBNscGamma,
  family = Gamma, link = "log",         # links inverse, identity and log (links get ignored!!)
  # algorithm = "meanfield",            # or "fullrank"
  # prior = normal(0, 8),
  adapt_delta = 0.98,
  control = list(max_treedepth = 20),   # max_treedepth gets ignored
  # QR = TRUE,
  chains = 4, cores = 1, iter = 8000)

Thanks so much.

Ana.

Jonah Gabry

Feb 15, 2016, 11:08:25 PM2/15/16
to Stan users mailing list
Hi Ana, 

Sorry you're running into these issues. Some comments and questions below:


On Monday, February 15, 2016 at 10:17:48 PM UTC-5, AM Madrigal wrote:
Hello all,

I’m trying to run a model using rstanarm, but I am having some problems and am unsure where I can find more advice.

You found the right place!


Questions /comments.
  • I tried changing the algorithm to “meanfield” and “fullrank” to test models, but the output seems quite random: sometimes it comes out with errors, sometimes it just goes quiet with no feedback and doesn’t create a model, and sometimes it gives some answers, mainly with warnings about not converging. When it doesn’t complain, I am unsure how to check convergence, as there is no Rhat statistic… So I have kept to MCMC, which takes long, so I don’t get immediate feedback, but at least when it doesn’t collapse I seem to be able to assess convergence or use shinystan (which is really good!! Congrats!)…
When you say it "goes quiet with no feedback and doesn't create the model" do you mean nothing at all happens when you run the code? When it errors, are the messages about "numeric overflow"? Those do seem to happen occasionally with the variational algorithms. If it does converge it should say "MEDIAN ELBO CONVERGED", although that's not the same sort of convergence as for MCMC, which is why there is no Rhat statistic. That said, there has been some discussion of developing something similar to Rhat, but we haven't gotten there yet. 

  • I tried centering the response (so I have positive and negatives) and use a gaussian and I do get some results converging after some time, however, the ppcheck shows that the normal doesn’t really fit the response which is quite skewed.
Have you tried a log transformation of the positive response in combination with a gaussian likelihood? 
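For instance, a minimal sketch of that suggestion, reusing the data frame and variable names from Ana's code (the iteration counts here are just illustrative):

```r
library(rstanarm)

# Model log(revenue) with a gaussian likelihood instead of modeling
# revenue directly with a Gamma; requires totalRevenue > 0.
fit_lognormal <- stan_glmer(
  log(totalRevenue) ~ Week0dn + totaladspend + onlinePC1 +
    (totaladspend | country) + (totaladspend | filmCluster),
  data = datBNscGamma,
  family = gaussian(),
  chains = 4, iter = 2000)
```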

  • I tried a family=Gamma, which seems conceptually a better option for my response. It doesn’t converge.
    • I tried adjusting the link to see if that helped, but it drops the “link=” parameter from the function… I followed the usual glm specification of the family-link, but it doesn’t work. Do you know how I can specify different links (e.g. identity / inverse / log)?
The link parameter should work the way you describe. How do you know it's ignoring it? I just tried out a few stan_glmer models with family = Gamma(link = "inverse") and family = Gamma(link = "log") and they seem to be ok.  
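As a sketch, the link goes inside the family function (the same convention as base R's glm()); variable names are taken from Ana's code:

```r
library(rstanarm)

# The link is passed inside the family function:
fit_gamma <- stan_glmer(
  totalRevenue ~ Week0dn + totaladspend + onlinePC1 +
    (totaladspend | country) + (totaladspend | filmCluster),
  data = datBNscGamma,
  family = Gamma(link = "log"),  # not: family = Gamma, link = "log"
  chains = 4, iter = 2000)
```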
    • I have tried adapt_delta at higher values like 0.98 / 0.99, and still no convergence
By not convergence do you mean you're getting warnings about divergences? Or bad looking trace plots / large Rhat values?
    • There was a comment in one of the threads about increasing the value for tree depth, but I haven’t found exactly how to pass that to stan_glmer(). I tried control = list(max_treedepth = 20), but it seems to ignore / drop it. Any ideas on how to make that go through?
control = list(max_treedepth = 20) is the right way to do it. I just double checked and I think it's working properly. If you specify control = list(max_treedepth = 20) and then run a model, what does it say if you then look at model$stanfit@stan_args[[1]]$control, which will pull out the internal control arguments used?
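As a concrete sketch of that check (fit here is an assumed, already-fitted stanreg object):

```r
# After fitting with control = list(max_treedepth = 20),
# inspect the control arguments that were actually passed to Stan:
fit$stanfit@stan_args[[1]]$control
```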
    • I’ve tried adding some priors to the coefficients , but maybe I need some more appropriate / vaguer ones, as when I add, the process goes into limbo. So for the moment leaving that out.
      • On the priors: when writing BUGS-type code, one usually specifies one distribution for each of the parameters... here it all seems to be collapsed into the "prior" option. I probably don't need to add different priors for each coefficient, and I'm guessing the priors for variances are OK to leave as "default".
In rstanarm, the location and scale arguments for the different priors (e.g. normal, student_t, etc) can be vectors if you want to provide different values for the different coefficients. If they're just scalars then the values are replicated to the appropriate length. 
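For example (the particular locations and scales here are purely illustrative):

```r
library(rstanarm)

# One location/scale per coefficient; the order follows the
# predictors in the model formula:
my_prior <- normal(location = c(0, 0, 0), scale = c(5, 2.5, 2.5))

# A scalar is recycled across all coefficients:
my_prior_scalar <- normal(0, 2.5)

# Either can then be passed as stan_glmer(..., prior = my_prior)
```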
      • Basic question: when specifying the priors in rstanarm, does it follow usual R convention where the scale is the standard deviation? or does it refer to "precisions" (1/sigma2). 
Yes, scale is the standard deviation.


Also, which version of rstanarm are you using? There is a new version on CRAN that we haven't announced yet because the binaries might still be building, but I think that, on the RStudio CRAN mirror at least, it should be available by now. So I would definitely update to the most recent version if you haven't already.


Best,

Jonah

AM Madrigal

Feb 15, 2016, 11:58:34 PM2/15/16
to Stan users mailing list
Thanks very much Jonah,
Encouraging to receive some advice!! Some replies below.


On Monday, February 15, 2016 at 10:08:25 PM UTC-6, Jonah Gabry wrote:
Hi Ana, 

Sorry you're running into these issues. Some comments and questions below:


On Monday, February 15, 2016 at 10:17:48 PM UTC-5, AM Madrigal wrote:
Hello all,

I’m trying to run a model using rstanarm, but I am having some problems and am unsure where I can find more advice.

You found the right place!
Fantastic! 


Questions /comments.
  • I tried changing the algorithm to “meanfield” and “fullrank” to test models, but the output seems quite random: sometimes it comes out with errors, sometimes it just goes quiet with no feedback and doesn’t create a model, and sometimes it gives some answers, mainly with warnings about not converging. When it doesn’t complain, I am unsure how to check convergence, as there is no Rhat statistic… So I have kept to MCMC, which takes long, so I don’t get immediate feedback, but at least when it doesn’t collapse I seem to be able to assess convergence or use shinystan (which is really good!! Congrats!)…
When you say it "goes quiet with no feedback and doesn't create the model" do you mean nothing at all happens when you run the code? When it errors, are the messages about "numeric overflow"? Those do seem to happen occasionally with the variational algorithms. If it does converge it should say "MEDIAN ELBO CONVERGED", although that's not the same sort of convergence as for MCMC, which is why there is no Rhat statistic. That said, there has been some discussion of developing something similar to Rhat, but we haven't gotten there yet. 

I've been playing with options all last week and don't recall which messages I got under which options. I'll try to reproduce some to give more informed feedback.
In the meantime, I tried running the same model as above, just changing the algorithm to both "fullrank" and "meanfield", and it does come back with the "numerical overflow" message. What does that mean?
In some combinations, it would run and say that the ELBO didn't converge.
In a few combinations (I believe running gaussian) I did get convergence, but then couldn't really get my head around how to judge the models.
In some situations, I would try running the model and yes, it went quiet in the sense that it would return to the " > " prompt with no model created and no other message [[ One of my hypotheses was that some other installed packages could be masking functions(?)... I tried at some point running a simple frequentist glmer to compare (from lme4) and sometimes even that would "go quiet" too... if I manage to reproduce it I'll let you know ]]
  • I tried centering the response (so I have positive and negatives) and use a gaussian and I do get some results converging after some time, however, the ppcheck shows that the normal doesn’t really fit the response which is quite skewed.
Have you tried a log transformation of the positive response in combination with a gaussian likelihood? 

That is a great idea: I can create a log-normal and that might work fine. I'll try and come back to you. I have actually now set it running, and I'll wait and see if it converges.


  • I tried a family=Gamma, which seems conceptually a better option for my response. It doesn’t converge.
    • I tried adjusting the link to see if that helped, but it drops the “link=” parameter from the function… I followed the usual glm specification of the family-link, but it doesn’t work. Do you know how I can specify different links (e.g. identity / inverse / log)?
The link parameter should work the way you describe. How do you know it's ignoring it? I just tried out a few stan_glmer models with family = Gamma(link = "inverse") and family = Gamma(link = "log") and they seem to be ok.  

My mistake: I was using the spec as in glm(), where I thought you use 'family=Gamma, link="log"', and at the end of the process it would tell me it had dropped the "link=" parameter... but I see I was specifying it wrongly and should have used the Gamma(link = "") syntax. So I guess it was running with the default link (inverse?) when it failed to converge.
    • I have tried adapt_delta at higher values like 0.98 / 0.99, and still no convergence
By not convergence do you mean you're getting warnings about divergences? Or bad looking trace plots / large Rhat values?

I mean getting, at the end of the process, messages of something like "chains did not converge. do not analyse the results!"
    • There was a comment in one of the threads about increasing the value for tree depth, but I haven’t found exactly how to pass that to stan_glmer(). I tried control = list(max_treedepth = 20), but it seems to ignore / drop it. Any ideas on how to make that go through?
control = list(max_treedepth = 20) is the right way to do it. I just double checked and I think it's working properly. If you specify control = list(max_treedepth = 20) and then run a model, what does it say if you then look at model$stanfit@stan_args[[1]]$control, which will pull out the internal control arguments used?

I'll look into this and come back to you. From memory, at the same time that run came back with the warning that it was ignoring the "link=" instruction, it said something similar about max_treedepth.
    • I’ve tried adding some priors to the coefficients , but maybe I need some more appropriate / vaguer ones, as when I add, the process goes into limbo. So for the moment leaving that out.
      • On the priors: when writing BUGS-type code, one usually specifies one distribution for each of the parameters... here it all seems to be collapsed into the "prior" option. I probably don't need to add different priors for each coefficient, and I'm guessing the priors for variances are OK to leave as "default".
In rstanarm, the location and scale arguments for the different priors (e.g. normal, student_t, etc) can be vectors if you want to provide different values for the different coefficients. If they're just scalars then the values are replicated to the appropriate length. 

Thanks for the tip.
      • Basic question: when specifying the priors in rstanarm, does it follow usual R convention where the scale is the standard deviation? or does it refer to "precisions" (1/sigma2). 
Yes, scale is the standard deviation.


Also, which version of rstanarm are you using? There is a new version on CRAN that we haven't announced yet because the binaries might still be building, but I think for now at least the RStudio CRAN mirror it should be available. So I would definitely update to the most recent version if you haven't already. 

I updated everything last week, from the R version to the packages. It says:

> library(rstanarm)
Loading required package: Rcpp
rstanarm (Version 2.9.0-1, packaged: 2016-01-09 18:39:57 UTC)
 


Best,

Jonah

Jonah Gabry

Feb 16, 2016, 12:42:36 AM2/16/16
to Stan users mailing list
I believe there's an even newer version now up on CRAN, so I would update again if you can. I recommend installing the update in a new R session so there aren't any objects created by the older version in the workspace. A few more comments below:


On Monday, February 15, 2016 at 11:58:34 PM UTC-5, AM Madrigal wrote:
In the meantime, I tried running the same model as above, just changing the algorithm to both "fullrank" and "meanfield", and it does come back with the "numerical overflow" message. What does that mean?

It means that some value became too large. It's just a computational issue due to arithmetic with finite bit representations of numbers. The gamma distribution can have this issue sometimes, especially in combination with the variational approximations. Maybe Dustin or Alp (they wrote those algorithms) will have a better suggestion, but I'd start with using more informative priors if possible.

 I tried at some point running a simple frequentist glmer to compare (from lme4) and sometimes even that would "go quiet" too... if I manage to reproduce it I'll let you know

Thanks. That sounds like a more general issue than rstanarm, but I'm not sure what the source of it is.
  • I tried centering the response (so I have positive and negatives) and use a gaussian and I do get some results converging after some time, however, the ppcheck shows that the normal doesn’t really fit the response which is quite skewed.
Have you tried a log transformation of the positive response in combination with a gaussian likelihood? 

That is a great idea, so I create a log-normal and that might work fine. I'll try and come back to you. I have actually now put it to run and I'll wait and see if that converges.

Ok great. 
 
 So I guess it was running it with the default link (inverse?) when it failed to converge.

Yeah, I believe the default link for Gamma is inverse.

By not convergence do you mean you're getting warnings about divergences? Or bad looking trace plots / large Rhat values?

I mean getting, at the end of the process, messages of something like "chains did not converge. do not analyse the results!"

Ok yeah, that warning is triggered by Rhat values greater than 1.1. You can see the Rhat values (among other things) for all parameters using the summary function, or you can get just the Rhat values with summary(model)[, "Rhat"]. If they're not far from 1.1 then you can try running for more iterations. But if they're far off then there are probably other issues with the model.
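For example (fit here is an assumed, already-fitted stanreg object):

```r
# All Rhat values, sorted so the worst offenders come first:
rhats <- summary(fit)[, "Rhat"]
sort(rhats, decreasing = TRUE)

# Parameters above the 1.1 threshold that triggers the warning:
names(rhats)[rhats > 1.1]
```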

I'll look into this and come back to you. From memory, at the same time that simulation came with the warning that it was ignoring the "link= .." instruction, it would say something similar about the max_treedepth. 

Ok thanks. I think adding control = list(max_treedepth = 20) should work, but it's probably not necessary.

Jonah

AM Madrigal

Feb 16, 2016, 12:45:44 AM2/16/16
to Stan users mailing list


Have you tried a log transformation of the positive response in combination with a gaussian likelihood? 

That is a great idea, so I create a log-normal and that might work fine. I'll try and come back to you. I have actually now put it to run and I'll wait and see if that converges.

Thanks! I ran the log-normal combination and it looks much more promising. It took about an hour and seems to be converging. It came with some warnings...

Warning messages:
1: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.98 may help. 
2: Examine the pairs() plot to diagnose sampling problems

What does it mean to have "X divergent transitions"? Is that over all chains? If so, having one over 8000 length chains doesn't seem dangerous (?)

What is the right use of the pairs() plot in this context? I tried using it directly as pairs(stan_glmermodel) and it returns errors, so I'm assuming this is not the way.

Quick question in terms of goodness of fit and comparing models with different explanatory variables. I know there are no BIC / AIC measures to compare whether a model with / without a variable is "better" than another... do you have any ideas for that?


Ben Goodrich

Feb 16, 2016, 1:02:56 AM2/16/16
to Stan users mailing list
Hi Ana,


On Tuesday, February 16, 2016 at 12:45:44 AM UTC-5, AM Madrigal wrote:


Have you tried a log transformation of the positive response in combination with a gaussian likelihood? 

That is a great idea, so I create a log-normal and that might work fine. I'll try and come back to you. I have actually now put it to run and I'll wait and see if that converges.

Thanks! Ran the log-normal combination and looks much more promising. Took about an hour and seem to be converging. It came with some warnings...

Warning messages:
1: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.98 may help. 
2: Examine the pairs() plot to diagnose sampling problems

What does it mean to have "X divergent transitions"? Is that over all chains? If so, having one over 8000 length chains doesn't seem dangerous (?)

It is 1 out of 4000 (the warmup is ignored), but it is still not too dangerous. For an explanation, see

https://cran.r-project.org/web/packages/rstanarm/vignettes/rstanarm.html#divergent-transitions

and / or

https://github.com/stan-dev/cmdstan/releases/download/v2.9.0/cmdstan-guide-2.9.0.pdf

and search for the text n_divergent.
 
What is the right use of the pairs() plot in this context(?) I tried directly using it as pairs(stan_glmermodel) and it returns errors, so assuming this is not the way.

That is the right way. What were the errors?
 
Quick question in terms of goodness of fit and comparing models with different explanatory variables. I know there is no BIC / AIC measures to compare if a model with  / without a variable is "better" than other... do you have any ideas for that?

Our preferred measure is LOOIC (Leave-One-Out Information Criterion), which is conceptually similar to the AIC (both are different from the BIC) but better, because the LOOIC allows for prior information and for posterior distributions that are not multivariate normal. Each of these information criteria is intended to estimate the expected log predictive density (ELPD) for a new observation. The compare() function can be used to compare different models (that have the same outcome, the same number of observations, and the same likelihood) on the ELPD criterion. The compare() function is used several times in the rstanarm vignettes and is based on this paper:

http://arxiv.org/abs/1507.04544

Ben

Jonah Sol Gabry

Feb 16, 2016, 1:10:01 AM2/16/16
to stan-...@googlegroups.com
On Tue, Feb 16, 2016 at 12:45 AM, AM Madrigal <am.madr...@gmail.com> wrote:


Have you tried a log transformation of the positive response in combination with a gaussian likelihood? 

That is a great idea, so I create a log-normal and that might work fine. I'll try and come back to you. I have actually now put it to run and I'll wait and see if that converges.

Thanks! Ran the log-normal combination and looks much more promising. Took about an hour and seem to be converging. It came with some warnings...

Warning messages:
1: There were 1 divergent transitions after warmup. Increasing adapt_delta above 0.98 may help. 
2: Examine the pairs() plot to diagnose sampling problems

What does it mean to have "X divergent transitions"? Is that over all chains? If so, having one over 8000 length chains doesn't seem dangerous (?)
 
It has to do with the numerical error from approximating integrals. I wouldn't be too concerned by one in 8000 iterations, but if you get a decent number of them then that's a problem. Also, in most cases you can use a much smaller number of iterations; 8000 seems like a lot for this kind of model. Of course, if it really does converge then having that many draws isn't actually a problem, just often unnecessary.

Quick question in terms of goodness of fit and comparing models with different explanatory variables. I know there is no BIC / AIC measures to compare if a model with  / without a variable is "better" than other... do you have any ideas for that?

For model fit I would start by taking a look at the posterior predictive checks. For comparing models, you can use the loo function in rstanarm, which uses the loo package to do a Bayesian approximation to leave-one-out cross-validation. You can compare models on their estimated out-of-sample predictive accuracy. This has some similarities to AIC/BIC, but it integrates over uncertainty in the parameters instead of using point estimates. There are a bunch of details on the help page for the loo function in rstanarm, as well as the loo and compare functions in the loo package. I would check those out.
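A sketch of that workflow, assuming two already-fitted stanreg models fit1 and fit2 with the same outcome and data:

```r
library(rstanarm)

# Approximate leave-one-out cross-validation for each model:
loo1 <- loo(fit1)
loo2 <- loo(fit2)

# Compare estimated out-of-sample predictive accuracy (ELPD);
# see the compare() help page for the sign convention:
compare(loo1, loo2)
```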

Hope that helps,

Jonah


 

Ana Maria Madrigal

Feb 16, 2016, 1:10:14 AM2/16/16
to stan-...@googlegroups.com
Thanks Ben,

great suggestions about comparing models. The log-normal is still a bit off in terms of the ppcheck of the response, but it is much better, and at least it doesn't seem to have the convergence issues that the Gamma did... In terms of pairs()...


What is the right use of the pairs() plot in this context(?) I tried directly using it as pairs(stan_glmermodel) and it returns errors, so assuming this is not the way.

That is the right way. What were the errors?

Loading required namespace: KernSmooth
Error in check_pars(allpars, pars) : 
  no parameter b[(Intercept) filmCluster:1], b[(Intercept) filmCluster:2], b[(Intercept) filmCluster:3], ..... etcetera...

Jonah Sol Gabry

Feb 16, 2016, 1:12:36 AM2/16/16
to stan-...@googlegroups.com
On Tue, Feb 16, 2016 at 1:10 AM, Ana Maria Madrigal <am.madr...@gmail.com> wrote:

Loading required namespace: KernSmooth
Error in check_pars(allpars, pars) : 
  no parameter b[(Intercept) filmCluster:1], b[(Intercept) filmCluster:2], b[(Intercept) filmCluster:3], ..... etcetera...


I think that's a bug. I'll look into it.

Ana Maria Madrigal

Feb 16, 2016, 1:25:16 PM2/16/16
to stan-...@googlegroups.com
Hello guys, 
I'm in the process of updating all packages to the latest versions as you suggested, to make a clean run of data and models. I will keep it to the log-normal to avoid the Gamma issues, and I will keep you posted if I get into any extra trouble or have any useful feedback to give.

I have an extra question that might not be related, because it is about formula specification rather than convergence, but here I go: is there a way to get two hierarchies in the "random" part of the model, i.e. get estimates of coefficients within group within country?

So substituting 
(totaladspend|country) + (totaladspend|filmCluster)

with some type of interaction:

(totaladspend|country*filmCluster)

or something like that? I haven't found an online reference for this type of "complexity" when there is more than one hierarchy...

any reference will be welcome, Thanks again.


Jonah Sol Gabry

Feb 16, 2016, 3:33:30 PM2/16/16
to stan-...@googlegroups.com
Hi Ana, rstanarm uses lme4's parser to parse the model formula (just the estimation algorithms differ) so you should be able to do anything that's possible to express using the lme4 syntax. I think something along the lines of what you're talking about can be done using ":" instead of "*" but I would double check the lme4 documentation to make sure it corresponds to what you're looking for. As an example, I think including something like 

(1|x1) + (1|x1:x2) 

in a model formula would give you intercepts varying by level of x1 as well as by level of x1 within x2.
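Translated to the model from earlier in the thread, a sketch using Ana's variable names (the log-normal specification and iteration counts are just one option, not a prescription):

```r
library(rstanarm)

# Varying totaladspend slopes by country, and additionally by
# filmCluster within country (the country:filmCluster combinations):
fit_nested <- stan_glmer(
  log(totalRevenue) ~ Week0dn + totaladspend + onlinePC1 +
    (totaladspend | country) + (totaladspend | country:filmCluster),
  data = datBNscGamma,
  family = gaussian(),
  chains = 4, iter = 2000)
```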

Jonah

Ana Maria Madrigal

Feb 17, 2016, 7:07:48 PM2/17/16
to stan-...@googlegroups.com
Thanks,

That syntax seems to work fine.

Quick question. I have a factor with levels "A", "B", "C", but when I run the model I get two coefficients (I guess the third level is taken as the reference), and the output labels them "L" and "Q". I saw the same in another test I was running with another factor variable, where two levels seem to be recoded. Is this standard in rstanarm? If so, is there any way I can return to my original levels to identify which is which?

X1.L                 -0.01  0.28
X1.Q                  0.07  0.25

thanks again... I'm conscious this has now deviated a bit.

Jonah Sol Gabry

Feb 17, 2016, 7:39:10 PM2/17/16
to stan-...@googlegroups.com
Yeah, the first factor level will be treated as the reference level, which should correspond to the behavior of typical R modeling functions like glm. The relevel function will let you change the reference level of a factor in your data frame if you want to interpret the coefficients relative to a different level than the first.
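For instance (grp here is a hypothetical unordered factor column, not one of the variables from the thread):

```r
# Base R: make "B" the reference level instead of "A", then refit.
# Coefficients for grp are then interpreted relative to level "B".
datBNscGamma$grp <- relevel(datBNscGamma$grp, ref = "B")
```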

I'm not sure why it's relabeling the levels with L and Q though. Would it be possible to send a reproducible example? It can be a toy model with fake data if that's simplest. 

Jonah

Ben Goodrich

Feb 17, 2016, 8:26:01 PM2/17/16
to Stan users mailing list
On Wednesday, February 17, 2016 at 7:07:48 PM UTC-5, AM Madrigal wrote:

Quick question. I have a factor with levels "A","B","C", but when I run the model, I get two coeffs (I guess  third one is taken as control), but output gives me "L" and "Q". I saw the same in another test I was running with another factor variable where two levels seem to be recoded.. Is this standard in rstan? If so, any way I can return to my original levels to identify which is which?

This is because one of the factors is an "ordered factor", in which case by default R uses whatever the contr.poly() function does when creating dummy variables, which I have never understood nor seen in any research context. There are two and a half ways around this:
  1. Make the variable an unordered factor with g <- factor(g, ordered = FALSE) before running the model
  2. Call the following before running the model:
    options(contrasts = c(unordered = 'contr.treatment', ordered = 'contr.treatment'))
    which makes it subsequently operationalize dummy variables with "treatment" contrasts (which is the convention in the social sciences and elsewhere; basically the dummy variables are 1 if the observation falls in a category and 0 otherwise)
  3. Put the code from (2) into a file called .Rprofile (note the leading period) in your home directory in which case it will be executed each time you start R
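A quick way to see the difference Ben describes, in plain base R (runnable as-is):

```r
g <- factor(c("A", "B", "C"), ordered = TRUE)

# Ordered factors default to polynomial contrasts, which produce
# the .L (linear) and .Q (quadratic) columns seen in the output:
contrasts(g)

# Option 1: drop the ordering to get 0/1 treatment dummies
# relative to the first level:
contrasts(factor(g, ordered = FALSE))
```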
Ben

Ana Maria Madrigal

Feb 18, 2016, 1:10:51 AM2/18/16
to stan-...@googlegroups.com
Thanks Ben,

Yes, I was going to reply to Jonah with the same point, which I realised by testing: making the variable unordered, the labels stay as the originals and are not changed to Q and L.
However, your reply is more complete and gives the other options.

Thanks for all the help guys.
