> On Feb 21, 2016, at 7:01 PM, Kevin Van Horn <ke...@ksvanhorn.com> wrote:
>
> The R2 prior, like Zellner's g prior, depends on the matrix of covariates X. This feature of both priors troubles me, for these reasons:
>
> 1. It's not clear to me why one's prior on the regression coefficients should depend on the observed values of the input variables.
I've always worried about this, too --- the exact same
situation comes up when standardizing predictors. I was
thinking, somewhat muddily, along the lines of your answer (3) ---
that with a big enough N, you're close enough to the population
distribution for the kind of applied statistics we do.
It's also come up before with constraints: if you want
    alpha * x[n] + beta > 0
for each data point n in 1:N, then even if the constraint is
satisfied at every observed point in x, there's no reason to
expect it to hold for a new point x[N + 1].
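Here's a quick sketch of that failure mode (all the numbers and distributions are made up for illustration): choose alpha and beta so the constraint just clears every observed point, then draw fresh inputs from the same population.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed inputs: N draws from some population (standard normal here).
N = 100
x = rng.normal(0.0, 1.0, size=N)

# Hypothetical parameter values chosen so that alpha * x[n] + beta > 0
# holds for every observed n --- beta just clears the worst observed point.
alpha = 1.0
beta = -alpha * x.min() + 0.01

# The constraint holds on the observed sample...
print(np.all(alpha * x + beta > 0))  # True

# ...but a new draw from the same population can fall below the sample
# minimum, violating the constraint out of sample.
x_new = rng.normal(0.0, 1.0, size=100_000)
violations = np.sum(alpha * x_new + beta <= 0)
print(violations)  # some positive count, in general
```

The point is just that the sample minimum isn't a bound on the population, so a constraint calibrated to the observed X can't be trusted at x[N + 1].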
...
> 2. Are such priors even consistent with Bayes' Rule?
...
> 3. One possible way around the problem of (2) above is to make the prior for beta depend, not on X itself, but on other parameters that determine the distribution for the vector of input variables. That is, we have
>
> x_i ~ Distribution_x(psi)
> beta ~ Distribution_b(psi)
> y_i ~ Normal(x_i' * beta, sigma)
...
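As a generative sketch of the setup in (3) --- hyperparameters psi feeding both the input distribution and the prior on beta --- here's one concrete (and entirely hypothetical) instantiation, with Distribution_x a normal with scale psi and Distribution_b a normal with scale 1/psi:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hyperparameter psi governing both the distribution of the
# inputs and the prior on the regression coefficients.
psi = 2.0

N, K = 50, 3

# x_i ~ Distribution_x(psi): inputs drawn with scale psi.
X = rng.normal(0.0, psi, size=(N, K))

# beta ~ Distribution_b(psi): prior scale tied to psi (here 1/psi), so the
# implied scale of x_i' * beta stays stable as psi varies.
beta = rng.normal(0.0, 1.0 / psi, size=K)

# y_i ~ Normal(x_i' * beta, sigma)
sigma = 1.0
y = rng.normal(X @ beta, sigma)
```

The prior on beta now depends only on psi, not on the realized X, so conditioning on the observed inputs stays a straightforward application of Bayes' rule.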
- Bob