Why scaling predictor variables, and how to back-transform model-averaged estimates?


Nadège Bélouard

Oct 10, 2016, 6:27:16 AM
to unmarked
Hi everyone,
I am trying to use unmarked, in particular its colext function, but I have a few very general questions.

First, I observed that predictor variables are generally scaled before building any model. I do not use this transformation in classical binomial GLMs, and I cannot find any explicit recommendation about it on this forum. Is it absolutely necessary to scale every predictor, and why? I compared some model results with and without this transformation and it does seem to matter: it changed the models' AIC ranking.

Second, I read that the coefficients need to be back-transformed when the model includes predictors. However, when using model averaging (with modavg from AICcmodavg), nothing is mentioned about this back-transformation: we get averaged estimates and their confidence intervals, but I have never read about any back-transformation of these averaged estimates, and I cannot find a function that applies to this result. Am I missing a step?

Any help would be greatly appreciated!
Nadège

Matthew Giovanni

Oct 10, 2016, 8:37:00 AM
to unma...@googlegroups.com

You might cite this paper:

http://www3.interscience.wiley.com/cgi-bin/fulltext/123278890/PDFSTART

Matt Giovanni
608-320-9331



John Clare

Oct 10, 2016, 9:55:08 PM
to unmarked
Just to add on:

1) Gelman and Hill's book (2007) also discusses the potential gains from scaling variables in depth (most, if not all, of this is covered in the paper Matt provided). An additional consideration is that a covariate spanning a large range of values, or a set of covariates with very different ranges, can cause optimization problems.
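As a concrete illustration of that standardization step (a minimal sketch with made-up covariate names; `elev` and `forest` are assumptions, not from this thread):

```r
# Standardizing site covariates before model fitting.
# 'elev' and 'forest' are hypothetical covariate names.
set.seed(1)
site.covs <- data.frame(elev   = runif(50, 100, 2500),  # large range
                        forest = runif(50, 0, 1))       # small range

# scale() centers each column to mean 0 and divides by its SD,
# putting the covariates on comparable scales for the optimizer.
site.covs.sc <- as.data.frame(scale(site.covs))

colMeans(site.covs.sc)          # approximately 0 for each column
apply(site.covs.sc, 2, sd)      # exactly 1 for each column
```

The standardized data frame would then be passed as siteCovs when constructing the unmarkedMultFrame; remember to apply the same centering and scaling to any new data used for prediction.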

2) modavgPred provides model-averaged parameter estimates on the real scale; modavgShrink appears to be the more appropriate means of deriving model-averaged beta coefficients; and modavgEffect may be better for assessing model-averaged differences between levels of categorical variables (but Marc Mazerolle posted something regarding this not too long ago, which is the better place to look).
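A rough sketch of how those calls fit together (not runnable as-is: it assumes two already-fitted colext models, m1 and m2, and a prediction data frame newdat, all hypothetical names):

```r
# Sketch only: 'm1', 'm2', 'newdat', and the covariate name 'elev'
# are placeholders, not objects from this thread.
library(unmarked)
library(AICcmodavg)

cands <- list(m1, m2)
names(cands) <- c("psi(elev)", "psi(.)")

# Model-averaged predictions of occupancy, already back-transformed
# to the probability scale, so no manual plogis() step is needed.
pred <- modavgPred(cand.set = cands, newdata = newdat,
                   parm.type = "psi", type = "response")

# Model-averaged beta coefficient (still on the logit scale),
# averaged with shrinkage across all candidate models.
beta <- modavgShrink(cand.set = cands, parm = "elev", parm.type = "psi")
```

Note that modavgPred returns predictions, not coefficients, which is why the back-transformation question largely disappears: averaging is done on the link scale and the result is reported on the response scale.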

Nadège Bélouard

Oct 12, 2016, 5:23:04 AM
to unmarked
Thanks a lot for your help! I didn't know about this paper; things are clearer to me now.