question about hBayesDM for dot probe task


Mikael Rubin

Oct 29, 2024, 8:48:44 PM
to hbayesdm-users
Hi all,

My basic question is: Is it necessary to model an observed keypress or is it acceptable to model choice as a task-based construct (i.e., task condition)?

The dot probe task is intended to detect covert attentional processing of stimulus-specific (e.g., affective vs. neutral) information, with the probe replacing either the affective ("congruent") or the neutral ("incongruent") stimulus. In the past, researchers have examined the difference between DDM parameters for the two dot probe conditions, using accuracy modeled separately for each condition. I was wondering whether it is acceptable to look only at accurate trials and treat the "choice" as the condition (threat vs. neutral), so that no actual keypress is modeled. My thinking is that the interpretation of the model parameters would then be different (i.e., they would represent the underlying latent covert processing of affective information), but I was not sure whether this is an invalid use of drift diffusion modeling, since the model is normally estimated from observed keypress choices (to determine a decision outcome such as accuracy).
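To make the idea concrete, here is a rough sketch (in Python; the input column names are just illustrative) of the recoding I have in mind: keep only accurate trials, and map the condition onto the two decision boundaries, since the choiceRT_ddm model in hBayesDM expects columns subjID, choice (1 or 2), and RT.

```python
# Sketch only: recode dot-probe trials so that "choice" is the task
# condition (congruent = 1, incongruent = 2) rather than a keypress.
# The raw column names ("condition", "accurate", "rt") are hypothetical;
# hBayesDM's choiceRT_ddm expects columns subjID, choice, RT.

trials = [
    {"subjID": 1, "condition": "congruent",   "accurate": True,  "rt": 0.412},
    {"subjID": 1, "condition": "incongruent", "accurate": True,  "rt": 0.487},
    {"subjID": 1, "condition": "congruent",   "accurate": False, "rt": 0.390},
]

BOUNDARY = {"congruent": 1, "incongruent": 2}  # lower / upper boundary

ddm_rows = [
    {"subjID": t["subjID"], "choice": BOUNDARY[t["condition"]], "RT": t["rt"]}
    for t in trials
    if t["accurate"]  # keep accurate trials only
]

print(ddm_rows)
```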

Thank you!
Mikael

Eunhwi Lee

Nov 14, 2024, 12:11:21 AM
to hbayesdm-users
Dear Mikael,

I apologize for the delayed response.

In general, computational models, including DDM, require observed responses or behaviors from participants to estimate latent parameters (e.g., drift rate). This approach allows us to capture differences across conditions in the underlying decision processes.

If your primary goal, however, is to model the conditions directly (perhaps for predictive purposes or to understand intrinsic differences in processing each condition), it may be feasible to analyze the conditions as you suggested. But this would shift the interpretation of parameters and wouldn’t capture decision dynamics in the traditional sense. Moreover, this approach could contradict the foundational premise of computational modeling, which seeks to explain behaviors as a result of underlying latent parameters. Since trial conditions are predetermined rather than influenced by latent parameters, careful consideration and justification of parameter interpretations would be essential for reviewers and readers to validate this approach.

Additionally, DDM traditionally incorporates reaction time (RT) as an observed variable to model the information accumulation process. In practice, identifying an appropriate RT input in this context could pose a challenge.

Please do not hesitate to reach out if you need further discussion or assistance.

Best,
Eunhwi

On Wednesday, October 30, 2024 at 9:48:44 AM UTC+9, mru...@paloaltou.edu wrote:

Mikael Rubin

Nov 19, 2024, 12:50:34 AM
to hbayesdm-users
Dear Eunhwi,

Thank you so much for your reply. You highlight exactly my concerns, namely: "But this would shift the interpretation of parameters and wouldn’t capture decision dynamics in the traditional sense. Moreover, this approach could contradict the foundational premise of computational modeling, which seeks to explain behaviors as a result of underlying latent parameters." I am especially concerned with the "could contradict" piece: I am trying to figure out whether I can conceptualize observed RTs across "choice" represented as congruent/incongruent (note that these labels reflect threat/target location, not true congruence as an interference parameter in a task like the Stroop task) as reflecting a latent underlying parameter explaining attentional bias (decision making that reflects preferentially guided covert attention based on stimulus valence).

Also, Nate posted in 2020: "We have it coded up so that you can think of it in terms of decision boundaries, rather than accuracy. So, the `choice` variable is 1 or 2, indicating the lower or upper decision boundary, respectively (see the help files for more details). Therefore, a positive drift rate estimate indicates more evidence toward whatever choice represents the upper boundary, and vice-versa. In the preprocessing code, the RTs are split into upper and lower boundary responses according to the `choice` variable, and we then increment the log likelihood on the upper boundary RTs and lower boundary RTs separately." I see this as perhaps supporting my approach, although I am concerned I may have misunderstood the takeaway.
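If I understand the preprocessing step Nate describes, it amounts to something like the following (a Python sketch with made-up values; in the actual package the likelihood itself is computed in Stan, which this does not reproduce):

```python
# Sketch of the preprocessing Nate describes: RTs are partitioned by the
# `choice` variable into lower-boundary (1) and upper-boundary (2)
# responses, and the log likelihood is then incremented separately for
# each set. Values below are made up for illustration.

choices = [1, 2, 2, 1, 2]
rts     = [0.41, 0.52, 0.38, 0.45, 0.60]

rt_lower = [rt for c, rt in zip(choices, rts) if c == 1]
rt_upper = [rt for c, rt in zip(choices, rts) if c == 2]

print(rt_lower)  # RTs that terminated at the lower boundary
print(rt_upper)  # RTs that terminated at the upper boundary
```

Nothing in this split seems to require that the boundaries correspond to physical keypresses, which is why I read the post as possibly compatible with condition-as-choice coding.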

Based on the above, is it plausible to interpret the DDM parameters for the dot probe as follows:
  • alpha represents the deliberation threshold, reflecting the impact of the stimuli on the decision process and response commitment.
  • beta reflects the pre-existing tendency to expect the probe to follow either the threat (congruent) or neutral (incongruent) stimulus; the stimuli are presented simultaneously and counterbalanced for location (left/right).
  • delta represents the rate of evidence accumulation, influenced by the salience of the threat or neutral stimulus.
  • tau represents the non-decision time, primarily accounting for motor response and sensory processing.

There may not be an authoritative answer, but if you or other folks have any additional thoughts they would be very welcome! Thank you again for taking the time.

Sincerely,
Mikael

Eunhwi Lee

Nov 25, 2024, 12:01:35 AM
to hbayesdm-users
Dear Mikael,

Thank you for your detailed explanation and thoughts. Using “choice” as a task-based construct rather than an observed response is an interesting approach, but it does differ from the traditional use of DDM, which models latent parameters based on observed reaction times and accuracy.

Your proposed parameter interpretations—such as alpha representing the deliberation threshold and beta reflecting pre-existing expectancy bias—seem logical and align well with your framework. However, parameters like delta (drift rate) and beta (starting point bias) are traditionally tied to observable data, so reinterpreting them based on task conditions alone might reduce their grounding in the decision-making process they are intended to model.

If you decide to move forward with this approach, it may be helpful to frame it as a conceptual adaptation of DDM while explicitly addressing these challenges. Combining observed data with task conditions, if possible, could also enhance the interpretability of the parameters.

Another possibility is considering an alternative approach: fitting the DDM parameters separately for the congruent and incongruent conditions. By modeling these conditions independently, you could directly compare parameters like drift rate, boundary separation, or non-decision time between the two conditions. This would allow you to assess, for example, whether drift rate differs significantly between congruent and incongruent trials, reflecting differences in evidence accumulation based on stimulus salience.
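As a rough sketch of that comparison (in Python, with simulated stand-in values; in practice the samples would come from the posterior draws of the two fitted model objects), you could examine the posterior distribution of the difference in drift rate between conditions:

```python
import random

random.seed(0)

# Stand-ins for posterior samples of drift rate (delta) from two
# separate DDM fits, one per condition. These are simulated here for
# illustration; real values would be extracted from the fitted objects.
delta_congruent   = [random.gauss(0.9, 0.2) for _ in range(4000)]
delta_incongruent = [random.gauss(0.7, 0.2) for _ in range(4000)]

# Posterior distribution of the difference, and the posterior
# probability that drift rate is higher on congruent trials.
diff = [c - i for c, i in zip(delta_congruent, delta_incongruent)]
p_greater = sum(d > 0 for d in diff) / len(diff)

print(f"mean difference: {sum(diff) / len(diff):.3f}")
print(f"P(delta_congruent > delta_incongruent) = {p_greater:.3f}")
```

The same comparison could be made for boundary separation or non-decision time, which keeps the parameters grounded in observed responses while still testing condition differences.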

I hope this helps, and I’d be happy to discuss further if needed.

Best regards,
Eunhwi

On Tuesday, November 19, 2024 at 2:50:34 PM UTC+9, mru...@paloaltou.edu wrote: