Is there a page giving information about initial values?


Jiang Du

Sep 25, 2017, 8:53:41 PM
to R-inla discussion group
Hi,
I read somewhere that good initial values can speed up the model-fitting process. For hierarchical models, how can I provide initial values for all the parameters?

For example:
y ~ Poisson(Er)
log(r) = a + bx + u
u ~ BYM(tau1, tau2)
log(tau1) ~ log_gamma(...)
log(tau2) ~ log_gamma(...)

I assume I can provide initial values for a, b, and u (a 2n×1 vector for the BYM model), together with tau1 and tau2.
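[For reference, a minimal sketch of how initial values are passed in R-INLA for a model like this. The data objects and the graph file name are hypothetical, and `initial` is on the internal scale, i.e. the log-precision:]

```r
library(INLA)

# Hypothetical data: counts y, expected counts E, covariate x,
# region index idx, and an adjacency graph file "map.graph".
formula <- y ~ x +
  f(idx, model = "bym", graph = "map.graph",
    hyper = list(
      prec.unstruct = list(initial = 4),   # log(tau1), iid component
      prec.spatial  = list(initial = 4)))  # log(tau2), spatial component

result <- inla(formula, family = "poisson", E = E,
               data = data.frame(y = y, E = E, x = x, idx = idx))
```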

Daniel Simpson

Sep 25, 2017, 9:03:12 PM
to Jiang Du, R-inla discussion group
1) We recommend using the bym2 parameterisation, which is more sensible and less sensitive to prior specification (and it is also easier to set the priors).

2) inla.doc("bym2") will bring up the model specification. If you look at the hyper specification, each parameter has an "initial" argument. NOTE: this is on the internal scale. For example, for a precision parameter, the initial value is on the log scale.
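[Concretely, the `initial` argument sits inside `hyper`. The values below are purely illustrative; for the precision the internal scale is log(tau), and for the mixing parameter it is logit(phi):]

```r
# Sketch of a bym2 term with explicit initial values on the internal scale
f(idx, model = "bym2", graph = "map.graph", scale.model = TRUE,
  hyper = list(
    prec = list(prior = "pc.prec", param = c(1, 0.01),
                initial = log(10)),   # internal scale: log precision
    phi  = list(prior = "pc", param = c(0.5, 2/3),
                initial = 0)))        # internal scale: logit(phi)
```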

D

--
You received this message because you are subscribed to the Google Groups "R-inla discussion group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to r-inla-discussion...@googlegroups.com.
To post to this group, send email to r-inla-disc...@googlegroups.com.
Visit this group at https://groups.google.com/group/r-inla-discussion-group.
For more options, visit https://groups.google.com/d/optout.

Jiang Du

Sep 25, 2017, 9:17:49 PM
to R-inla discussion group
I actually like the idea of bym2, after reading the paper on PC priors and the disease-mapping model that accounts for scaling. However, I am hesitant to use it since I don't know how to handle additional random effects.

For example, for the spatial random effect (I use the notation from those two papers) we have PC priors for the overall precision and for the mixing parameter phi, which distributes the variance between the spatial and non-spatial parts.

However, what if I have an RW2 effect? Should I follow the general variance-partition rule in the original PC-prior paper (the last example)? I also have an individual-level iid random error indexed by my observations (I am actually fitting a survival model):
# i: represent county i
# j: represent the j-th patient in county i 
Y ~ Weibull(alpha, lambda)
log(lambda_{ij}) = alpha_i + stage_{ij} + age_{ij} + b_i + e_{ij}
where:
alpha_i: intercept
stage_{ij}: the cancer stage of the patient ij 
age_{ij}: the age of the patient, put in 12 categories, with values 1,2,...,12
b_i: I want to use BYM2 with PC priors
e_{ij}: lack of fit (error) at individual level

For the random effects age, b, and e, I am not sure whether I can assign sensible/correct PC priors. I actually want to compare PC priors against the model above, for which I already have results from Gibbs sampling with standard priors. In short, I am confused about how to assign priors within the PC-prior framework for complicated models.





Haakon Bakka

Sep 26, 2017, 11:47:58 AM
to Jiang Du, R-inla discussion group
The research into how to use PC priors in your case is not complete. We are working on related problems at the moment.

One question: you have both alpha_i and b_i, but b_i includes an iid effect. Is this an error?

# What I would do, if I had to solve your problem today
Use separate PC priors for b_i and the other components, so that the priors for the RW2 and the e_ij are "pc.prec".

# Where do you have replicates?
How the priors influence your problem will depend heavily on where, and how many, replicates you have. If you have 100 counties with thousands of patients in each, I am fairly certain any good (i.e. "not bad") prior should give the same result.
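[A sketch of that setup in R-INLA. The variable names, graph file, and PC-prior parameters are illustrative; the survival likelihood would be something like family = "weibullsurv":]

```r
# One "pc.prec" prior reused for the rw2, bym2 precision, and iid terms;
# param = c(1, 0.01) encodes P(sigma > 1) = 0.01 (illustrative choice)
pc.prec <- list(prior = "pc.prec", param = c(1, 0.01))

formula <- inla.surv(time, event) ~ stage +
  f(age,     model = "rw2",  hyper = list(prec = pc.prec)) +
  f(county,  model = "bym2", graph = "map.graph", scale.model = TRUE,
    hyper = list(prec = pc.prec,
                 phi  = list(prior = "pc", param = c(0.5, 2/3)))) +
  f(patient, model = "iid",  hyper = list(prec = pc.prec))
```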

Kind regards,
Haakon Bakka





Haakon Bakka

Sep 26, 2017, 11:50:42 AM
to Jiang Du, R-inla discussion group
About initial values, you can look at the example at

If you run a simpler model, or a reduced dataset, and then pick out the resulting values in the way shown there, you can use them for the next run / a more complicated model / the full dataset.
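[In code, that pattern looks roughly like this; the formula and data objects are placeholders:]

```r
# First fit: a simpler model or a reduced dataset
r0 <- inla(formula, family = "poisson", data = d.reduced)

# Reuse its hyperparameter mode as initial values for the full fit
r1 <- inla(formula, family = "poisson", data = d.full,
           control.mode = list(theta = r0$mode$theta, restart = TRUE))
```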

Haakon


Jiang Du

Sep 26, 2017, 1:09:35 PM
to R-inla discussion group
#1 question: You have both alpha_i and b_i, but b_i includes an iid effect. Is this an error?

Yes, it is an error. I made a mistake when typing my model. 
It should be
Y_{ij} ~ Weibull(alpha, lambda_{ij})
lambda_{ij} = intercept + stage_{ij} + age_{ij} + b_i + e_{ij}
So the intercept is shared by all patients, and b_i tries to capture county effects; it will be assigned a BYM/BYM2 model.

I was concerned about the prior choices since in my last application (a disease-mapping problem) I used the usual Inverse-Gamma prior for all variance parameters, but it turned out to be very sensitive to the Inverse-Gamma's shape and rate. Later I switched to a Half-Cauchy distribution, which was robust. In the process I came across PC priors and really want to try them in my current study.

By the way, if I want to use a Half-Cauchy prior, I need to specify it myself, right?


Haakon Bakka

Sep 27, 2017, 4:07:39 AM
to Jiang Du, R-inla discussion group
That makes sense.

I have only bad things to say about the inverse gamma prior, so I am not surprised by your results! I suggest transforming it to the sigma scale and just looking at the prior distribution...

The PC prior is exponential on the standard deviation. Where (on what parameter) do you want the half-Cauchy? I am not sure whether it is implemented or not. It should not be too much work to do it yourself. Just note that the internal parameterisation is not the same as the "reported/external" parameterisation.

Haakon





Jiang Du

Sep 27, 2017, 3:01:28 PM
to R-inla discussion group
About the inverse gamma prior, I've seen some papers using it with scaled spatial effects. Those priors are like (1, 0.01), (1, 0.001), and (1, 0.0005). Since they use INLA, they ran a sensitivity analysis and showed robustness. In your experience, does that robustness result from the scaling? In addition, I wonder whether it is possible to scale the precision matrix with some parameter in it, for example the smoothing parameter $\rho$ in a proper CAR prior.

I will try to implement the Half-Cauchy prior on the standard deviation. I have found some pages about customised priors.


Haavard Rue

Sep 27, 2017, 3:09:51 PM
to Jiang Du, R-inla discussion group
check out

inla.doc("pc.prec")

it does it beautifully.

check out

inla.doc("expression")

for adding your own. Note the internal scale, and that it's the
log(prior) that is returned.
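[As an illustration of the "expression:" mechanism, a half-Cauchy prior on the standard deviation might be written like this. The scale parameter 25 is arbitrary; theta is the internal log-precision, so sigma = exp(-theta/2), and the last line adds the log-Jacobian of that transformation:]

```r
halfcauchy <- "expression:
  sigma = exp(-theta / 2);
  gamma = 25;
  log_dens = log(2) - log(pi) - log(gamma);
  log_dens = log_dens - log(1 + (sigma / gamma)^2);
  log_dens = log_dens - log(2) - theta / 2;
  return(log_dens);"

# Used in place of a named prior, e.g.:
# f(idx, model = "iid", hyper = list(prec = list(prior = halfcauchy)))
```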

H

--
Håvard Rue
hr...@r-inla.org

Daniel Simpson

Sep 27, 2017, 3:11:00 PM
to Jiang Du, R-inla discussion group
On 27 September 2017 at 15:01, Jiang Du <jiang...@gmail.com> wrote:
About the inverse gamma prior, I've seen some papers are using them with scaled spatial effect. Those priors are like (1,0.01), (1,0.001) and (1,0.0005). Since they are using INLA, they implemented sensitivity analysis and show the robustness. Based on your experience, is that robustness resulted from scaling? In addition, I wonder if it is possible to scale the precision matrix with some parameter in it, for example, like the smoothing parameter $\rho$ a proper CAR prior. 

There is some robustness that comes along from scaling (see Sørbye, S. H. and Rue, H. (2014). Scaling intrinsic Gaussian Markov random field priors in spatial modelling. Spatial Statistics 8 39–51.)

But the use of gamma priors on precisions (or inverse gammas on variances) is a bad idea. We have some unpublished [working on it] work that suggests they learn the true parameter at a slower rate than other options. There is also a very detailed simulation study here:
Klein, N., Kneib, T. et al. (2016). Scale-Dependent Priors for Variance Parameters in Structured Additive Distributional Regression. Bayesian Analysis 11, 1071–1106.
 
I will try to work on the Half Cauchy prior on standard deviation. I have found some pages regarding on customized prior.

We strongly recommend the exponential prior for reasons outlined at length here: https://arxiv.org/pdf/1403.4630.pdf.  

In our experience, the half-Cauchy's tail is too heavy and can lead to problems. Gelman no longer recommends it. If for some reason you don't want to use the exponential, a half Student-t distribution with 3 or more degrees of freedom is another option. (Again, see the simulations of Klein and Kneib.)

More discussion on this is here: https://arxiv.org/abs/1708.07487

Hope this helps,

Dan
 

Helpdesk

Sep 27, 2017, 3:11:04 PM
to Jiang Du, R-inla discussion group
On Wed, 2017-09-27 at 22:09 +0300, Haavard Rue wrote:
> check out
>
> inla.doc("pc.prec")
>
> it does not beutifully.
>

'it', not 'not'

--
Håvard Rue
Helpdesk
he...@r-inla.org

Jiang Du

Sep 27, 2017, 3:45:59 PM
to R-inla discussion group
I am really interested in the topic of choosing the "correct" priors (Jeffreys priors, reference priors, PC priors, weakly informative priors, etc.), but I recently realised the topic is too hard for me right now (being a PhD student working on my own). The papers you mentioned are a great help for keeping up with the most recent ideas. Thanks!
