Repeated measures, within-subject condition comparison


Zsuzsika Sjoerds

unread,
Sep 18, 2018, 7:17:00 AM9/18/18
to hbayesdm-users
Hi all,

For a new study using a reinforcement learning task, I would like to try the hBayesDM toolbox for model fitting, model comparison, and parameter comparison. The set-up of my study, however, makes me doubt which approach I should take. This is a more conceptual problem in the context of hierarchical Bayesian models, which might have practical consequences for how to proceed:
Our dataset comes from a repeated-measures design: one intervention condition and one placebo condition within subjects, counterbalanced. We assessed the task during both conditions, so I have two data files per person. I want to know whether parameters during the intervention differ from parameters during placebo.
As hierarchical Bayes takes the group mean into account, I wonder whether I should fit all the data in one go (with the risk that both conditions shrink toward the common mean, removing possible variance), or model the two conditions separately (with the risk of inflating possible differences between conditions).


What would be the wise approach here? I guess my main concern is how hBayesDM handles within-subject versus between-subject error and the related (in)dependencies.

Thanks!

Lei Zhang

unread,
Sep 19, 2018, 9:43:46 AM9/19/18
to Zsuzsika Sjoerds, hbayesdm-users
Hi - this is a good question.

I would start by fitting the data from the placebo condition first, which sets the baseline.
Then you could use the posteriors from the placebo condition as the priors for the treatment condition, which tests how your treatment shifts the parameters.
To modify the priors, go to R-3.x.x/library/hBayesDM/stan.
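A rough R sketch of this two-step workflow (assuming ts_par6 is the model of interest; the data file path is illustrative, and the parameter name mu_pr follows hBayesDM's Stan files but may differ across versions):

```r
library(hBayesDM)

# Step 1: fit the baseline (placebo) condition.
fit_placebo <- ts_par6(data = "placebo_data.txt",
                       niter = 4000, nwarmup = 1000, nchain = 4)

# Step 2: summarize the placebo posteriors for the group-level parameters.
post     <- rstan::extract(fit_placebo$fit)
prior_mu <- apply(post$mu_pr, 2, mean)  # posterior means, untransformed scale
prior_sd <- apply(post$mu_pr, 2, sd)    # posterior SDs

# Step 3: edit the model's Stan file under R-x.x.x/library/hBayesDM/stan so
# that its group-level priors use these values instead of the defaults, then
# refit the treatment-condition data with the modified model.
```

This is a sketch, not a drop-in script: the Stan-file edit in step 3 has to be done by hand, and the posteriors are summarized on the unconstrained scale that hBayesDM's Stan code uses internally.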

I would also be curious about how Young and others will deal with this problem. 

Hope it helps. 
L.


---
Lei Zhang


--
You received this message because you are subscribed to the Google Groups "hbayesdm-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hbayesdm-users+unsubscribe@googlegroups.com.
To post to this group, send email to hbayesdm-users@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/hbayesdm-users/abefe4eb-2f2d-432c-bdf6-6768c9581367%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Haines, Nathaniel B.

unread,
Sep 19, 2018, 10:18:06 AM9/19/18
to Zsuzsika Sjoerds, hbayesdm-users
Definitely a good question.

My way of approaching this in the past has been to create a difference variable that estimates the difference between conditions. Another way to think about it is to estimate a different parameter for each condition. 

To find out which (if any) parameters are affected by the manipulation, I fit various models that assume either no differences, difference in only parameter X, difference in only parameter Y, etc., and then I use model comparison to select the best model. 

Unfortunately, hBayesDM is not able to estimate within-subject parameter differences in its current state, so you would have to modify the stan code to do so. This is a feature that would be great to add in the future. 
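The condition-difference idea can be sketched as a Stan fragment (hypothetical code, not hBayesDM's actual model; all parameter names are illustrative):

```stan
// Hypothetical fragment: per-subject baseline plus a within-subject
// condition-difference parameter for a learning rate alpha.
parameters {
  vector[N] alpha_pr;        // untransformed baseline (placebo) learning rate
  vector[N] alpha_diff;      // within-subject difference (treatment - placebo)
  real mu_diff;              // group-level mean of the difference
  real<lower=0> sd_diff;     // group-level SD of the difference
}
transformed parameters {
  vector<lower=0, upper=1>[N] alpha_placebo;
  vector<lower=0, upper=1>[N] alpha_treatment;
  alpha_placebo   = Phi_approx(alpha_pr);
  alpha_treatment = Phi_approx(alpha_pr + alpha_diff);
}
model {
  alpha_diff ~ normal(mu_diff, sd_diff);
  // ... likelihood uses alpha_placebo on placebo trials and
  //     alpha_treatment on treatment trials ...
}
```

A mu_diff whose posterior is credibly different from zero then indicates a treatment effect on that parameter, and the model-comparison approach above corresponds to fixing alpha_diff to zero for some parameters and not others.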

Best,
Nate


Zsuzsika Sjoerds

unread,
Aug 22, 2019, 6:37:35 AM8/22/19
to hbayesdm-users
Hi Lei Zhang,

Thanks for your tip! I like this approach you suggest, at least as a first try, before I change any Stan code, as Nathaniel suggested (my Stan skills are limited, so that will take me a while, and might be extremely error prone at this stage). The data has been lying still for a while, but now that I am back at it, I would like to indeed try to adjust the priors in the model of the treatment condition based on the posteriors of the 'winning' model of the placebo condition.

The LOOIC values for the 3 models I ran over the placebo data are as follows:
> printFit(output_par4_sh, output_par6_sh, output_par7_sh)
    Model    LOOIC LOOIC Weights
1 ts_par4 7739.925  3.382974e-26
2 ts_par6 7623.599  6.153726e-01
3 ts_par7 7624.539  3.846274e-01
There were 18 warnings (use warnings() to see them)

The warnings are along the lines of:

1: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
2: In log(z) : NaNs produced

It seems ts_par6 wins here, but I am worried about the really small difference between the models (and about the warning messages). Visualizing the modeled parameters and the MCMC traces does not give me any indication that something is wrong there. I also removed all NaNs from the raw data.
Alternatively, using WAIC, I get the warning that p_waic is greater than 0.4 and that loo should be used instead.

But apart from that, I would like to take the posteriors to the treatment condition, and it is unclear to me how to do this. I found the Stan scripts for the models and see that values can be initialized for v_mb, v_mf, and v_hybrid. Regarding the priors of the model parameters, I assume it is most sensible to use individual priors (allIndPars) rather than just the group mu's. In that case, how do I give the individual priors to the model for the treatment condition? Through a separate text file that I call in the modeling command? I am new to that (and to Stan code in general), so any tips or online resources that could guide me further would be helpful.

Thanks!




Lei Zhang

unread,
Aug 31, 2019, 9:40:16 AM8/31/19
to Zsuzsika Sjoerds, hbayesdm-users
Dear Zsuzsika,

Many apologies for the slow reply. Please see my inline comments below.

Best,
Lei

On Thu, Aug 22, 2019 at 12:37 PM Zsuzsika Sjoerds <z.sj...@gmail.com> wrote:
Hi Lei Zhang,

Thanks for your tip! I like this approach you suggest, at least as a first try, before I change any Stan code, as Nathaniel suggested (my Stan skills are limited, so that will take me a while, and might be extremely error prone at this stage). The data has been lying still for a while, but now that I am back at it, I would like to indeed try to adjust the priors in the model of the treatment condition based on the posteriors of the 'winning' model of the placebo condition.

The LOOIC values for the 3 models I ran over the placebo data are as follows:
> printFit(output_par4_sh, output_par6_sh, output_par7_sh)
    Model    LOOIC LOOIC Weights
1 ts_par4 7739.925  3.382974e-26
2 ts_par6 7623.599  6.153726e-01
3 ts_par7 7624.539  3.846274e-01
There were 18 warnings (use warnings() to see them)

The warnings are along the lines of:

1: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.
2: In log(z) : NaNs produced

This warning is common for hierarchical models and hard to avoid entirely. The ideal solution is to do K-fold CV, which may take some programming effort and time. For an example with a linear model, see https://avehtari.github.io/modelselection/rats_kcv.html#4_leave-one-out_cross-validation.
But in practice, at least in my previous simulations, even if I do K-fold CV, the model comparison results may not change dramatically. So I guess you are fine for now.

The second question is about the small difference on the IC scale. There are two ways to look at it: (1) just look at the next column, the LOOIC weights, which describe how likely each model is to be the "true" data-generating model; in your case, 0.615 vs 0.385. (2) Do a posterior predictive check, to see whether your par6 model's predictions match the real data better than your par7 model's.

 
It seems ts_par6 wins here, but I am worried about the really small difference between the models (and about the warning messages). Visualizing the modeled parameters and the MCMC traces does not give me any indication that something is wrong there. I also removed all NaNs from the raw data.
Alternatively, using WAIC, I get the warning that p_waic is greater than 0.4 and that loo should be used instead.

But apart from that, I would like to take the posteriors to the treatment condition, and it is unclear to me how to do this. I found the Stan scripts for the models and see that values can be initialized for v_mb, v_mf, and v_hybrid. Regarding the priors of the model parameters, I assume it is most sensible to use individual priors (allIndPars) rather than just the group mu's. In that case, how do I give the individual priors to the model for the treatment condition? Through a separate text file that I call in the modeling command? I am new to that (and to Stan code in general), so any tips or online resources that could guide me further would be helpful.

In the stan() call, there is an input argument called init. By default it is "random", but in your case you may want to supply a list of individual-specific parameter values. You could check lines 418-461 of our wrapper function for inspiration (https://github.com/CCS-Lab/hBayesDM/blob/master/R/R/hBayesDM_model.R).
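A minimal R sketch of that idea (assuming fit_placebo is the output of an earlier hBayesDM call such as ts_par6(); the parameter names a1, beta1, a2 are illustrative, and note that hBayesDM's Stan files parameterize subjects on unconstrained "_pr" scales, so values may need transforming back before use):

```r
# Build per-subject initial values from the placebo fit's individual estimates.
ind <- fit_placebo$allIndPars   # one row per subject, one column per parameter

init_fun <- function() {
  list(
    a1    = ind$a1,    # per-subject stage-1 learning rates (illustrative name)
    beta1 = ind$beta1, # per-subject stage-1 inverse temperatures
    a2    = ind$a2     # per-subject stage-2 learning rates
  )
}

# Pass init_fun via the init argument when sampling the treatment model;
# rstan calls it once per chain:
# fit_treatment <- rstan::sampling(model, data = stan_data,
#                                  init = init_fun, chains = 4)
```

Note that init only sets the chains' starting values; it does not change the priors. To actually carry the placebo posteriors over as priors, the Stan file itself has to be edited as discussed earlier in the thread.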

Hope that helps, 
Lei
 