Negative drift rate


Claire O'Callaghan

Nov 20, 2016, 8:13:59 AM
to hddm-users
Hi,

I am running a StimCoding model. I have three groups and three conditions; the responses are left and right and are equivalent, so there is no reason to suspect bias. For this reason, I want to estimate a single drift rate. Setting split_param='v' gives a single drift rate; however, it comes out negative (by contrast, setting split_param='z' gives positive drift rates, but separate ones for left and right stimuli, which is not what I'm after).

I am assuming I could just take the absolute values of the negative drift rate for ease of interpretation/plotting. But before going ahead with this I wanted to double-check whether this would be correct, as it seems strange that the output for a single drift-rate parameter would be negative. Also, if there is anything I could specify in my code to request a positive v rather than a negative one, that would be even more helpful.

# N.B. stim_2 = condition; response = stimulus-coded response, i.e., Left or Right; stimulus = L or R

#Model
m = hddm.HDDMStimCoding(data, group_only_nodes=['v', 'a', 't'],
                        stim_col='stimulus', split_param='v',
                        depends_on={'v': ['group', 'stim_2'],
                                    'a': ['group', 'stim_2'],
                                    't': 'group'},
                        p_outlier=0.05)
m.find_starting_values()
m.sample(120000, burn=20000, thin=10, dbname='traces.db', db='pickle')
m.save('Stimcode_model_10')

Many thanks,
Claire

Thomas Wiecki

Nov 21, 2016, 4:19:30 AM
to hddm-...@googlegroups.com
Hi Claire,

Negative drift in this case would mean that your subjects are less than 50% accurate. If that's not the case, you might have mislabeled your columns.

Best,
Thomas

--
You received this message because you are subscribed to the Google Groups "hddm-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hddm-users+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Claire O'Callaghan

Nov 21, 2016, 6:52:28 AM
to hddm-users
Hi Thomas,

Thanks for your quick reply and explanation. 
Accuracy is actually at 100% in the task. So the response column (coded as 1s and 0s to represent whether a left or right response was chosen) and the stimulus column (coded L and R, for the left and right choices) are matched. So I'm not sure where the mislabelling might be occurring, or why my current set-up would lead to less than 50% accuracy being modelled.

Best wishes,
Claire


 

Thomas Wiecki

Nov 21, 2016, 7:08:50 AM
to hddm-...@googlegroups.com
I'm afraid that with 100% accuracy HDDM will not do much for you in the first place. I also see that you have group in depends_on; is this a between-group design?


Claire O'Callaghan

Nov 21, 2016, 7:20:39 AM
to hddm-users
I had figured 100% accuracy might be a problem for the standard HDDM model, but I thought a StimCoding model could accommodate it.
Is there any way around this?
It is a between-group design (three groups: controls and two subtypes of Parkinson's disease). I opted for the depends_on by group as I was having convergence issues with parameter a. After adding the depends_on, the current model converges well.

Claire O'Callaghan

Nov 21, 2016, 7:30:41 AM
to hddm-users
Just to add -- for what it's worth, in case the 100% accuracy might throw off the model: when plotting the posteriors and running hypothesis tests on the results of the current model, the results appear to make sense given our predictions about these patient groups. And for the parameter t we have replicated a finding from an earlier study we conducted with Parkinson's patients. So given the good convergence and the plausible results, I hadn't thought the 100% accuracy might be causing such a problem?

Thomas Wiecki

Nov 22, 2016, 8:07:07 AM
to hddm-...@googlegroups.com
It doesn't have to be a problem.

The negative drift-rate issue is weird. If you are at 100% accuracy, your stim column should have values identical to your response column.


Claire O'Callaghan

Nov 22, 2016, 8:46:37 AM
to hddm-users
Ok, good to hear I can still go ahead even with the 100% accuracy. I agree the negative drift is weird then, and that there doesn't seem to be an obvious explanation. I plan to go ahead with taking the absolute values of the negative drift rate for ease of interpretation/plotting (unless that sounds problematic), but will post here if I discover the reason for it.
Thanks for the assistance,
Claire

Michael J Frank

Nov 23, 2016, 12:54:16 PM
to hddm-...@googlegroups.com
Just looking at this, it seems you might simply have inverted the response column:

"response column (coded 1s and 0s to represent whether a left or right response was chosen), and the stimulus column (coded L and R, for the left and right choices)"

So here the left/right response is coded as 1/0 but the stim column is coded as L/R, and there is nothing to ensure that L goes with 1 and R with 0.
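A quick way to check (and fix) this can be sketched with pandas. The toy data frame, the column names, and the L->1 mapping below are assumptions for illustration, not anything HDDM enforces:

```python
import pandas as pd

# Hypothetical stand-in for the real data frame (an assumption):
# response is already numeric (1 = left, 0 = right), stimulus is 'L'/'R'.
data = pd.DataFrame({'stimulus': ['L', 'R', 'L', 'R'],
                     'response': [1, 0, 1, 0]})

# Recode stimulus so that L pairs with 1 and R with 0, matching response.
data['stimulus'] = data['stimulus'].map({'L': 1, 'R': 0})

# With 100% accuracy the two columns should now be identical.
print((data['stimulus'] == data['response']).all())  # True
```

If the mapping were accidentally flipped ({'L': 0, 'R': 1}), every trial would look like an error, which is exactly the less-than-50%-accuracy situation that produces a negative drift rate.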

The issue Thomas points out with 100% accuracy is that it may be more difficult to tease apart threshold, drift rate, and non-decision time, since all of these can mimic changes in RT, but they make different predictions for how those changes should be accompanied by differences in accuracy.


Claire O'Callaghan

Nov 23, 2016, 1:14:06 PM
to hddm-users, Michae...@brown.edu
Hi Michael,
Thanks for your comment on this, and I think you are right -- I just finished running my model again with the resp and stim columns both coded as 0s/1s, and this seems to have solved the problem, i.e., it gives the exact same parameter estimates I had, but now v is positive.

Ok, that makes sense re the 100% accuracy. The a and t parameters I've ended up with do at least appear to be distinct from the drift rate, as distinctive between-group patterns emerge across each of the three parameters. Would there be any way to verify that the 100% accuracy is not throwing off the model in some way -- or is it enough that I have good convergence (R-hat and visual inspection) and that between-group comparisons of the posterior distributions show different patterns for each parameter (indeed, t replicates a pattern we saw comparing PD patients vs controls on a completely different task)?

Thanks,
Claire

Michael J Frank

Nov 25, 2016, 1:24:10 PM
to hddm-...@googlegroups.com

 Hi Claire,

see below

On Wed, Nov 23, 2016 at 1:14 PM, Claire O'Callaghan <co...@cam.ac.uk> wrote:
> Hi Michael,
> Thanks for your comment on this and I think you are right -- I just now finished running my model again with both the resp and stim columns both coded as 0s/1s, and this seems to have solved the problem, i.e., gives the exact same parameter estimates I had, but now v is positive.

great. 

> Ok, that makes sense re the 100% accuracy. Although the a and t parameters I've ended up with at least appear to be distinct from the drift rate, as there are distinctive between group patterns emerging across each of the three parameters. Would there be any way to verify that the 100% accuracy is not throwing off the model in some way -- or is it enough that I have good convergence (R hat and visual inspection), and that between-group comparisons of the posterior distributions show different patterns for each parameter (and indeed, t replicates a pattern we saw comparing PD patients vs controls on a completely different task).

Those are both good and encouraging signs. Other approaches that would increase confidence that the model is capturing a real distinction between the parameters:

* posterior predictive checks -- does the model capture the observed RT quantiles and the differences between patients and controls?

* parameter recovery -- if you generate data from the model with DDM parameters that produce 100% accuracy, varying a, t and v over the same ranges you estimate empirically, can you then fit those generated data and recover the generating parameters? That would increase confidence that the model can separate the parameters, presumably because they have different effects on the shape of the RT distributions despite 100% accuracy. I believe Thomas has some code you could use, based on the recovery simulations we did in the HDDM software paper.

* Short of doing the recovery, you could also look at the covariance among your estimated parameters, e.g. the joint distribution over a and t, to see how collinear they are.
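A toy sketch of the quantile comparison in the first point. HDDM itself provides posterior predictive machinery (hddm.utils.post_pred_gen / post_pred_stats); here synthetic "observed" and "simulated" RTs stand in for the real data and model draws, so the distributions below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.lognormal(mean=-0.5, sigma=0.3, size=1000)    # "data" RTs (s)
simulated = rng.lognormal(mean=-0.5, sigma=0.3, size=10000)  # "model" draws

# Compare the RT quantiles the model predicts with those in the data;
# a well-fitting model should reproduce them closely.
qs = [0.1, 0.3, 0.5, 0.7, 0.9]
obs_q = np.quantile(observed, qs)
sim_q = np.quantile(simulated, qs)
print(np.max(np.abs(obs_q - sim_q)) < 0.1)
```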
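And a minimal sketch of the collinearity check in the last point, assuming the posterior traces have already been pulled out of the fitted model (in HDDM, something like m.nodes_db.node['a'].trace()); the synthetic arrays here are stand-ins for real posterior samples:

```python
import numpy as np

# Stand-in posterior traces for a (threshold) and t (non-decision time);
# in practice these would come from the fitted HDDM model's trace objects.
rng = np.random.default_rng(0)
a_trace = rng.normal(2.0, 0.1, size=5000)
t_trace = 0.3 + 0.5 * a_trace + rng.normal(0.0, 0.05, size=5000)

# Correlation near +/-1 means the two parameters trade off against each
# other and are hard to separate; values near 0 suggest identifiability.
r = np.corrcoef(a_trace, t_trace)[0, 1]
print(f"correlation between a and t traces: {r:.2f}")
```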
 

Claire O'Callaghan

Nov 26, 2016, 6:20:49 PM
to hddm-users
Thanks very much for that, Michael. I was planning on doing posterior predictive checks, but hadn't thought about parameter recovery, which sounds like a good suggestion.
I found an older post where Thomas linked to the code for parameter recovery, so I'll try to apply that. Here's the link to that older post in case anyone else is interested:
