--
You received this message because you are subscribed to the Google Groups "hddm-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hddm-users+...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
from math import exp

def prob_ub(v, a, z):
    """Probability of hitting upper boundary."""
    return (exp(-2*a*z*v) - 1) / (exp(-2*a*v) - 1)
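As a sanity check, the closed-form boundary probability above can be compared against a brute-force random-walk simulation. This is a minimal sketch, not part of HDDM; the parameter values, step size, and trial count are arbitrary choices, and the function is restated with NumPy so the block is self-contained:

```python
import numpy as np

def prob_ub(v, a, z):
    """Closed-form probability of hitting the upper boundary
    (same formula as above, restated for self-containment)."""
    return (np.exp(-2*a*z*v) - 1) / (np.exp(-2*a*v) - 1)

def simulate_upper_fraction(v, a, z, n_trials=20000, dt=1e-3,
                            max_steps=20000, seed=1):
    """Fraction of Euler-discretized diffusion paths absorbed at the
    upper boundary a (paths start at z*a, lower boundary at 0)."""
    rng = np.random.default_rng(seed)
    x = np.full(n_trials, z * a)             # start points
    active = np.ones(n_trials, dtype=bool)   # paths not yet absorbed
    upper = np.zeros(n_trials, dtype=bool)   # paths absorbed at the top
    for _ in range(max_steps):
        # Euler step: drift v plus unit-variance diffusion noise
        x[active] += v * dt + np.sqrt(dt) * rng.standard_normal(active.sum())
        hit_up = active & (x >= a)
        hit_lo = active & (x <= 0.0)
        upper |= hit_up
        active &= ~(hit_up | hit_lo)
        if not active.any():
            break
    return upper.mean()

v, a, z = 1.0, 2.0, 0.5  # illustrative values only
print(prob_ub(v, a, z), simulate_upper_fraction(v, a, z))
```

The simulated fraction should agree with the analytic value up to Monte Carlo noise and a small Euler discretization bias.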
Hi Thomas,

I deleted my earlier reply because rethinking it showed me that the solution I proposed there does not work. Rethinking it led me to the solution Imri had suggested earlier (sorry for not picking it up right then!). This solution entails simply calculating the log likelihood as

sum_log_p = sum_logp_response_trials + sum_logp_NOresponse_trials

whereby sum_logp_NOresponse_trials is simply the number of trials without a response multiplied by the log probability that the respective boundary is hit.

One (inelegant) way to implement this is to give the no-response trials a specific response time by which they can be identified as such, and to use this when calculating the likelihoods in likelihoods.py.
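The decomposition can be sketched in a few lines. The counts and parameter values below are made up for illustration, and the no-response trials are assumed to correspond to lower-boundary hits (the response part would come from wiener_like):

```python
import numpy as np

def prob_ub(v, a, z):
    # closed-form probability of absorption at the upper boundary
    return (np.exp(-2*a*z*v) - 1) / (np.exp(-2*a*v) - 1)

v, a, z = 1.0, 2.0, 0.5   # illustrative parameter values
n_noresp = 40             # hypothetical number of trials without a response

p_lower = 1 - prob_ub(v, a, z)                 # no response = lower boundary hit
sum_logp_noresp = n_noresp * np.log(p_lower)   # each such trial contributes log p

# sum_log_p = sum_logp_response_trials + sum_logp_noresp
# (the first term comes from the usual wiener_like call on the response trials)
```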
You find one such implementation below. There are several inefficiencies in it, which mostly have to do with my inexperience in working with Python. I'd like to implement a more efficient version before I do a simple model-recovery exercise to test how a, v, z can be recovered from data where non-responses are deleted. Here are the two things I'd like to know so that I can make a more efficient implementation:

1) Where would I need to change the code to change the variables submitted to wfpt_like? (I am thinking of directly passing the number of no-response trials instead of calculating them in each call of wfpt_like.)

2) I think it would be even faster to let wfpt.wiener_like do the entire likelihood calculation. Would implementing this be as easy as defining a new function in wfpt.pyx?
Cheers - Guido

#### modified wfpt_like for data from the CPT (or n-back, go-nogo ...) ####
# NOTE!
# I assume a bias towards responding here,
# so that no-response trials for target trials have negative "fake RTs" and
# no-response trials for non-target trials have positive "fake RTs"
def wfpt_likeCPT(x, v, sv, a, z, sz, t, st, p_outlier=0):
    # separate trials into no-response and response trials;
    # here, I use .55555 to identify no-response trials
    resp = x['rt'].abs() != .55555
    # sum of log p for trials with a response, as usual
    # (wp is the dict of wiener-likelihood options from likelihoods.py)
    sum_logp_resp = hddm.wfpt.wiener_like(x['rt'][resp], v, sv, a, z, sz, t, st,
                                          p_outlier=p_outlier, **wp)
    # sum of log p for trials without a response: the log probability of
    # hitting the respective boundary (lower boundary for target trials,
    # upper boundary for non-target trials), times the number of such trials
    p_noresp = (np.exp(-2*a*z*v) - 1) / (np.exp(-2*a*v) - 1)
    if x['rt'][~resp].iloc[0] < 0:
        p_noresp = 1 - p_noresp
    sum_logp_noresp = np.log(p_noresp) * (x['rt'].count() - sum(resp))
    return sum_logp_resp + sum_logp_noresp
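To make the sentinel trick concrete, here is a toy example of the masking step (the RTs are made up; .55555 is only a marker value, as in the function above, with negative fake RTs for target trials and positive ones for non-target trials):

```python
import pandas as pd

# toy data: signed "RTs"; +/-0.55555 mark trials without a response
x = pd.DataFrame({'rt': [0.43, 0.61, -0.52, -0.55555, 0.55555, 0.70]})

resp = x['rt'].abs() != 0.55555      # boolean mask: True = trial with a response
rts_resp = x['rt'][resp]             # these would be passed on to wiener_like
n_noresp = int((~resp).sum())        # number of no-response trials

print(n_noresp, list(rts_resp))
```

Passing n_noresp around directly, instead of recomputing the mask on every likelihood call, is exactly the efficiency gain asked about in question 1) above.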
Now to the results, which are in the attached figures:
PR_HDDM4CPT.png shows true parameter values and posterior means for 64 parameter combinations (4z*4a*4v, t was fixed at .2) which produce an accuracy in the range typically found in the CPT.
The data used here are all trials with RTs for correct responses and false alarms, plus the number of misses and the number of correct rejections. I generated medium-sized data sets of 1800 trials (i.e. 5 Conners CPTs).
PR_HDDM_fulldata.png shows, for comparison, the recovered parameters when we use the full data set (i.e. misses and correct non-responses are included with their RTs).
The key results are:
- parameters are (as expected) well recovered when using the full data
- using only CPT data leads to underestimation of v and overestimation of a and z
- fortunately, this is a simple linear bias such that the order of the recovered parameters is (essentially) the same as the order of the true parameters
NOTE that all these results are for the Conners CPT, which has 90% target trials that require a response. I would expect the bias to be bigger for other CPTs with fewer target trials.
All in all, it looks to me as if this is usable. I would be happier without the bias, but it is not a big problem for my application, where I am more interested in finding differences within participants and between groups.
Finally I just repeat that any feedback about the likelihood function would be welcome!
Cheers - Guido
...
# - accuracy coding such that correct responses have a 1 and incorrect responses a 0
# - usage of HDDMStimCoding for z
Thanks for offering, it'd be great if you could do a PR for this.
Thomas