Hi Anne,
Thanks so much for your answer! I'm very much looking forward to the work you have in progress!
I'm familiar with both papers, and you do show convincingly that, with good data, one can distinguish the two biases based on the mode vs. the trailing edge of the RT distribution... (I'm actually trying that out in a different data set, testing whether reward and punishment magnitudes modulate the starting point or the drift rate, and was recently looking for posterior predictive checks, so your paper was very helpful for that!)
However, from a parameter-estimation perspective, in my data the starting-point bias takes rather low values in the presence of a drift intercept, which I find hard to interpret... In the HDDM implementation, an unbiased starting point should be around 0.5, but with the drift intercept included, the bias ends up around 0.25... so I wonder whether interpreting individual differences in this parameter still makes any sense...?
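To make the trade-off concrete, here is a minimal simulation sketch (not the HDDM implementation; all parameter values are made up for illustration) showing how a positive drift intercept and a starting point below 0.5 can push choice proportions in opposite directions, so the estimator can offset one against the other:

```python
import random

def simulate_ddm(v, a=2.0, z=0.5, dt=0.005, sigma=1.0, n=2000, seed=0):
    """Crude Euler simulation of a two-boundary DDM.

    v: drift rate (a constant v would play the role of a drift intercept)
    a: boundary separation (bounds at 0 and a)
    z: relative starting point in (0, 1); z = 0.5 is unbiased
    Returns the fraction of trials absorbed at the upper boundary.
    """
    rng = random.Random(seed)
    upper = 0
    for _ in range(n):
        x = z * a  # start between the boundaries
        while 0.0 < x < a:
            # Euler step: drift plus Gaussian diffusion noise
            x += v * dt + sigma * rng.gauss(0.0, 1.0) * dt ** 0.5
        if x >= a:
            upper += 1
    return upper / n

# No bias at all: upper-boundary choices near 50%
p_neutral = simulate_ddm(v=0.0, z=0.5)
# Positive drift intercept alone: well above 50%
p_intercept = simulate_ddm(v=0.5, z=0.5)
# Drift intercept partly offset by a low starting point (like z ~ 0.25)
p_offset = simulate_ddm(v=0.5, z=0.25)
```

In this toy setup `p_offset` lands back much closer to 50% than `p_intercept`, which is one way to see why, once a drift intercept soaks up an overall choice asymmetry, the fitted starting point can drift to compensatory values like 0.25 that no longer map cleanly onto "prior bias" on their own.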
I'm very curious about your ideas on the "neural implementation"! For the history biases you look at, there seem to be some plausible candidate explanations for a drift-rate intercept (e.g., stimulus or motor priming, i.e., you accumulate evidence for a certain stimulus/response more easily if you've just seen/executed it). However, in an RL-DDM context, what would the drift intercept mean? That people accumulate evidence for one choice option (left/right or Go/NoGo) more easily, overall? Which raises the question of what "evidence" actually gets accumulated in such choices... I'm just worried that I'll get this question once I submit it, and I find it hard to motivate the drift intercept for my kind of task/data...
Best regards,
Johannes