Just to add more information, in case it helps further discussion.
The mixed model is the same for all approaches:
y = Xb + Za + e
where,
y = vector of observations (phenotypes)
X = design matrix for the fixed effects
b = vector of fixed effects
Z = matrix of marker genotypes (element Z[i][j] is the jth marker of the ith animal)
a = vector of marker effects
e = vector of residuals (noise)
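To make the notation concrete, here is a minimal simulation of y = Xb + Za + e (all values and dimensions are arbitrary examples of mine, not from any real data set):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 50                                            # animals, markers
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # intercept + one binary factor
b = np.array([10.0, 1.5])                                 # fixed effects
Z = rng.integers(0, 3, (n, m)).astype(float)              # genotypes coded 0/1/2
a = rng.normal(0.0, 0.1, m)                               # marker effects
e = rng.normal(0.0, 1.0, n)                               # residual noise
y = X @ b + Z @ a + e                                     # the mixed model
```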
As I said, each approach assumes a different prior distribution for "a".
So, in...
GBLUP - a_i ~ N(0, sigma_a^2)
Bayes A - a_i ~ t(0, v, sigma_a^2)
(equivalently, in Meuwissen's hierarchical form: a_i | sigma_ai^2 ~ N(0, sigma_ai^2), with sigma_ai^2 following a scaled inverse chi-squared distribution with v degrees of freedom)
Bayes B - a_i ~ pi * delta_0 + (1 - pi) * t(0, v, sigma_a^2)
(a fraction "pi" of markers has zero effect and the remaining (1 - pi) fraction follows the t distribution)
Lasso - p(a_i) = (lambda/2) exp(-lambda * |a_i|)
(the double-exponential / Laplace density)
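Just to illustrate how these priors compare, here is a small sketch of mine evaluating the densities on a grid (sigma_a, v, lambda and pi are arbitrary example values, and I write the Student-t density by hand to avoid extra dependencies):

```python
import numpy as np
from math import gamma, sqrt, pi as PI

sigma_a, v, lam, p0 = 0.1, 4.0, 10.0, 0.9
x = np.linspace(-0.5, 0.5, 201)

def t_pdf(z, v):
    """Density of the standard Student-t with v degrees of freedom."""
    c = gamma((v + 1) / 2) / (sqrt(v * PI) * gamma(v / 2))
    return c * (1 + z**2 / v) ** (-(v + 1) / 2)

gblup   = np.exp(-x**2 / (2 * sigma_a**2)) / (sigma_a * sqrt(2 * PI))  # N(0, sigma_a^2)
bayes_a = t_pdf(x / sigma_a, v) / sigma_a        # scaled t(0, v, sigma_a^2)
lasso   = 0.5 * lam * np.exp(-lam * np.abs(x))   # double exponential
bayes_b = (1 - p0) * bayes_a                     # continuous part; plus a point mass p0 at 0
```

The heavier tails of the t and Laplace densities are what lets a few markers have large effects while most are shrunk toward zero.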
Is it possible to implement such priors for "a" in INLA?
If not, is there another strategy I could use to approximate these priors (e.g. with Gaussian-like priors)?
I was thinking: maybe we could approximate the t distribution by the normal distribution, as in
http://www.m-hikari.com/ams/ams-2015/ams-49-52-2015/zogheibAMS49-52-2015.pdf
Would it be feasible?
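One simple moment-matching idea (an assumption on my part, not necessarily the method of the linked paper) would be to replace t(v) by a normal with the same variance, v/(v - 2), which exists for v > 2:

```python
import numpy as np
from math import gamma, sqrt, pi as PI

v = 6.0
x = np.linspace(-4, 4, 161)
c = gamma((v + 1) / 2) / (sqrt(v * PI) * gamma(v / 2))
t_pdf = c * (1 + x**2 / v) ** (-(v + 1) / 2)      # Student-t(v) density
s = sqrt(v / (v - 2))                             # matched standard deviation
n_pdf = np.exp(-x**2 / (2 * s**2)) / (s * sqrt(2 * PI))
max_gap = np.max(np.abs(t_pdf - n_pdf))           # crude measure of approximation error
```

The match is decent in the centre but necessarily underweights the tails, which is exactly where the t prior matters for large marker effects.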
Another strategy would be to apply weights to the marker effects (using ridge regression) that are updated after each iteration.
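The reweighting idea could be sketched like this (my own illustration: a ridge solve where each marker's penalty weight is updated from its current estimate, which is a local quadratic approximation to the Lasso penalty; eps avoids division by zero):

```python
import numpy as np

def reweighted_ridge(Z, y, lam=1.0, n_iter=20, eps=1e-6):
    """Iteratively reweighted ridge: markers with small current estimates
    receive a heavier penalty, mimicking Lasso-style sparsity."""
    m = Z.shape[1]
    w = np.ones(m)                          # initial penalty weights
    a = np.zeros(m)
    for _ in range(n_iter):
        A = Z.T @ Z + lam * np.diag(w)      # weighted ridge system
        a = np.linalg.solve(A, Z.T @ y)
        w = 1.0 / (np.abs(a) + eps)         # heavier penalty on small effects
    return a

# toy check: one strong marker among nine null ones
rng = np.random.default_rng(1)
Zt = rng.normal(size=(80, 10))
a_true = np.zeros(10)
a_true[0] = 2.0
yt = Zt @ a_true + rng.normal(0, 0.1, 80)
a_hat = reweighted_ridge(Zt, yt)
```

After a few iterations the null markers are shrunk essentially to zero while the true effect survives, which is the behaviour I would hope to emulate.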
I really think the INLA method is an opportunity to make the Bayesian approach more widely used in genomics.
I don't really need Bayes A, Bayes B, etc. exactly as they are.
If there is a way to adapt INLA to implement different priors for "a", or to approximate them with a normal-based mixture, then I want to try it.
Again, sorry for the long message, Prof. Rue.
I have been thinking about this and wanted to write my ideas down.
Best regards, and thank you for your attention.