cross-validating the hddm


anne...@gmail.com

May 1, 2019, 2:23:11 PM
to hddm-users
Hi HDDM'ers,

I've read quite a few discussion threads about different model comparison metrics for HDDM, some of which mention cross-validation. However, I haven't been able to find a working implementation of cross-validation for HDDM models (either leave-one-out or K-fold, depending on dataset size and model sampling time).

Does anyone have a working example or implementation of HDDM cross-validation they could share?

Thanks!

Best,

Anne E. Urai, PhD
Postdoctoral Fellow, Cold Spring Harbor Laboratory
www.anneurai.net / @AnneEUrai

Krista Bond

May 1, 2019, 2:33:24 PM
to hddm-...@googlegroups.com
Seconded!


Sam Mathias

May 1, 2019, 3:16:18 PM
to hddm-users
My guess would be no, because CV is hard. There is an implementation of LOO-CV in pymc3 (not pymc2), but backporting this would be non-trivial.

Hope I'm wrong!
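For what it's worth, the data-splitting half of K-fold CV is straightforward even without library support. Below is a minimal sketch of subject-wise splitting, assuming the usual HDDM data frame with a `subj_idx` column; the per-fold fitting and, especially, the scoring of the held-out fold (the non-trivial part) are left as hypothetical placeholders:

```python
import numpy as np
import pandas as pd

def kfold_subject_splits(data, k=5, seed=0):
    """Partition subjects (not trials) into k folds, so each held-out
    fold contains whole subjects the fitted model never saw."""
    rng = np.random.default_rng(seed)
    subjects = data['subj_idx'].unique().copy()
    rng.shuffle(subjects)
    for held_out in np.array_split(subjects, k):
        mask = data['subj_idx'].isin(held_out)
        yield data[~mask], data[mask]

# Toy data frame with the columns HDDM expects.
rng = np.random.default_rng(0)
data = pd.DataFrame({
    'subj_idx': np.repeat(np.arange(10), 20),
    'rt': np.abs(rng.normal(size=200)) + 0.3,
    'response': rng.integers(0, 2, 200),
})

for train, test in kfold_subject_splits(data, k=5):
    # Hypothetical per-fold fit; computing a held-out likelihood is the
    # hard part and is not shown:
    # m = hddm.HDDM(train); m.sample(2000, burn=500)
    pass
```

Splitting by subject rather than by trial keeps the hierarchical structure intact: the held-out fold is evaluated for subjects whose individual-level parameters were never fit.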

Thomas Wiecki

May 1, 2019, 4:31:36 PM
to hddm-...@googlegroups.com
You're unfortunately correct. 

Michael J Frank

May 1, 2019, 5:12:49 PM
to hddm-...@googlegroups.com
We will be releasing an update to HDDM soon (more on that in a bit). It won't include this, but I have a student who is planning to implement other metrics this summer. In the meantime, I recommend, as always, that one consider posterior predictive checks on the theoretically meaningful measures of any given study at least as much as any single model selection metric.
Michael J Frank, PhD 
Edgar L. Marston Professor of Cognitive, Linguistic and Psychological Sciences
Laboratory of Neural Computation and Cognition
Carney Institute for Brain Science
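In that spirit, a posterior predictive check can be as simple as comparing observed RT quantiles against quantiles of data simulated from the fitted model. A minimal NumPy sketch follows; the simulated RTs below are a toy stand-in for actual posterior predictive draws (e.g. from `hddm.utils.post_pred_gen`):

```python
import numpy as np

def quantile_check(observed_rt, simulated_rt, qs=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Compare observed vs. posterior-predictive RT quantiles.
    Large gaps flag aspects of the data the model fails to capture."""
    obs = np.quantile(observed_rt, qs)
    sim = np.quantile(simulated_rt, qs)
    return dict(zip(qs, obs - sim))

# Toy stand-in data: in practice, `simulated` would come from the
# model's posterior predictive draws.
rng = np.random.default_rng(1)
observed = rng.lognormal(-0.5, 0.4, 500) + 0.3
simulated = rng.lognormal(-0.5, 0.4, 5000) + 0.3
gaps = quantile_check(observed, simulated)
```

The same comparison can be run per condition or per subject, which is usually where model misfit shows up first.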

Shira Lupkin

May 1, 2019, 7:13:43 PM
to hddm-...@googlegroups.com
In the new version, are there workarounds to run posterior predictive checks on regression models (specifically, using `utils.post_pred_gen`)?
Shira Lupkin
Graduate Student
Behavioral and Neural Sciences Program
Rutgers University, Newark

Michael J Frank

May 4, 2019, 8:56:12 AM
to hddm-...@googlegroups.com
Probably not yet, though Mads can chime in on how one can do this outside of HDDM after extracting the chains, and perhaps we will add some example pseudocode.


Shira Lupkin

May 4, 2019, 5:52:29 PM
to hddm-...@googlegroups.com
That would be very helpful, thanks! 

Mads Lund Pedersen

May 5, 2019, 10:40:01 AM
to hddm-...@googlegroups.com
My approach to doing posterior predictive checks on regression models has been to loop through the entire dataset, recreate the predicted trial-by-trial parameter values, and generate data from those. Unfortunately, I don't have a general function that works for any model, so I have to modify the code for each specific model. But I'm happy to share some code when we release the update/extension to HDDM.

Best,
Mads 
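To make that loop concrete, here is a hedged Python sketch of the idea for a hypothetical regression like `v ~ 1 + coherence`. The coefficient values and the crude Euler simulator below are illustrative stand-ins; in practice, the coefficients would be posterior means extracted from the fitted model's traces (e.g. via `model.get_traces()`):

```python
import numpy as np

def simulate_ddm_trial(v, a, t, z=0.5, dt=1e-3, max_t=5.0, rng=None):
    """Crude Euler simulation of one drift-diffusion trial with
    boundaries at 0 and a, relative start point z, non-decision time t.
    Returns (rt, response); response is 1 for the upper bound."""
    if rng is None:
        rng = np.random.default_rng()
    x = z * a
    n_steps = int(max_t / dt)
    noise = rng.normal(0.0, np.sqrt(dt), n_steps)
    for i in range(n_steps):
        x += v * dt + noise[i]
        if x >= a:
            return t + (i + 1) * dt, 1
        if x <= 0:
            return t + (i + 1) * dt, 0
    return max_t, int(x >= a / 2)  # no boundary hit: censor at max_t

# Illustrative posterior means for a hypothetical model v ~ 1 + coherence;
# in practice these come from the fitted model's traces.
v_int, v_slope, a_hat, t_hat = 0.3, 2.0, 1.5, 0.3

rng = np.random.default_rng(0)
coherence = rng.uniform(-1, 1, 200)  # toy trial-wise predictor
predicted = []
for c in coherence:
    v_trial = v_int + v_slope * c  # recreate the trial-wise parameter
    predicted.append(simulate_ddm_trial(v_trial, a_hat, t_hat, rng=rng))
rts, responses = map(np.array, zip(*predicted))
```

The simulated `rts`/`responses` can then be compared against the observed data, condition by condition, as in any other posterior predictive check.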

Shira Lupkin

May 5, 2019, 10:49:27 AM
to hddm-...@googlegroups.com
Hi Mads, 

Is there any chance you would share the code before the update?

Shira

Mads Lund Pedersen

May 5, 2019, 10:53:29 AM
to hddm-...@googlegroups.com
I only have the code in R, and I'd like to have it in Python before sharing it on the forum. But I can send you a personal email with the R code.