Hey all,
I've posted several times about a funky model of mine, and I'm now looking into the various ways of examining the same question using variations on the model itself.
In one part of the model, there's a probability, delta, that two parameters are actually the same thing. I don't mean "the parameter for this group is essentially the same value as the parameter for another group", but rather they are literally the same thing; one should assume theta_1 = theta_2. This matters for the model because if they are the same, then the two groups inform the estimation of the single parameter rather than the two groups informing separate parameters.
Conceptually, what I would /like/ to do is something like this. Say D = 0 means they are the same, and D = 1 means they are not, so P(D = 0) = delta:

y ~ p(parameter[group])
D ~ bernoulli(1 - delta)
if (D == 0) { parameter[2] = parameter[1] }
if (D == 1) { parameter[2] ~ normal(0, 1) }
I know one needs to marginalize D out for this to work in Stan, but I'm at a loss for how to do it in this case.
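To make the question concrete: my best guess is that the marginalized version has to mix the two /likelihoods/ rather than the priors, something like the sketch below (untested; a normal likelihood stands in for my real one, and theta plays the role of parameter above). But I'm not confident this is the right construction:

```stan
data {
  int<lower=1> N;
  vector[N] y;
  array[N] int<lower=1, upper=2> group;
}
parameters {
  vector[2] theta;               // theta[2] only matters when D = 1
  real<lower=0> sigma;
  real<lower=0, upper=1> delta;  // P(D = 0), i.e. P(they are the same)
}
model {
  real lp_same = 0;  // log-likelihood given D = 0 (one shared parameter)
  real lp_diff = 0;  // log-likelihood given D = 1 (separate parameters)
  theta ~ normal(0, 1);
  sigma ~ exponential(1);
  for (n in 1:N) {
    lp_same += normal_lpdf(y[n] | theta[1], sigma);
    lp_diff += normal_lpdf(y[n] | theta[group[n]], sigma);
  }
  // Marginalize the discrete indicator D out of the joint:
  // p(y) = delta * p(y | same) + (1 - delta) * p(y | different)
  target += log_mix(delta, lp_same, lp_diff);
}
```

Is something along these lines what marginalizing D would look like here?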
Conceptually, this is a bit like a mixture prior: if D = 0, then parameter[2] ~ normal(parameter[1], 0.0000001) [no real difference between parameter[2] and parameter[1]], and if D = 1, then parameter[2] ~ normal(0, 1) [it is permitted to take various values]. This is easy enough to code, but normal(parameter[1], 0.0000001) is very slow to sample due to the crazy curvature the prior introduces.
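For reference, the mixture-prior version I mean is roughly this (simplified; a normal likelihood stands in for my real one, and theta plays the role of parameter above):

```stan
data {
  int<lower=1> N;
  vector[N] y;
  array[N] int<lower=1, upper=2> group;
}
parameters {
  vector[2] theta;               // theta = "parameter" above
  real<lower=0> sigma;
  real<lower=0, upper=1> delta;  // P(D = 0), i.e. P(they are the same)
}
model {
  theta[1] ~ normal(0, 1);
  // Mixture prior on theta[2]: near-point-mass at theta[1] vs. free.
  // The tiny scale in the first component is what creates the
  // problematic curvature.
  target += log_mix(delta,
                    normal_lpdf(theta[2] | theta[1], 0.0000001),
                    normal_lpdf(theta[2] | 0, 1));
  sigma ~ exponential(1);
  y ~ normal(theta[group], sigma);
}
```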
Ideally, it would be a mixture between "we know the value is equal to parameter[1]" and "we know the value is independent of parameter[1]".
The inference is /not/ really about parameter[2] - parameter[1] itself; this is a simplified example of a much more complex model assessing measurement invariance.
Any ideas come to mind?