Posterior Predictive Check on hBayesDM object - seeking guidance


Morgan Beatty

Jan 17, 2026, 1:05:38 PM
to hbayesdm-users

Hi all,


My team has been using the hBayesDM package for several tasks (two-stage decision making, the Kirby questionnaire, Go/No-Go, and probabilistic reversal learning), and we would like to perform posterior predictive checks (PPCs) on our models. Our goals with the PPC are twofold: 1) validate model fit in a way that will satisfy future reviewers, and 2) compare fit across multiple models to determine which best accounts for the data, either graphically or via a numeric output. We are aware of the hBayesDM functionality for producing LOOIC values for model comparison, but the hierarchical models we use trigger warnings about high Pareto-k values. Past threads suggest these warnings are common in hierarchical models and not necessarily alarming for overall model fit (https://groups.google.com/g/hbayesdm-users/c/RXUTGeAs0x4/m/RMf4ZXwoAAAJ), but they may also indicate that LOO estimates are unreliable for the data at hand (https://groups.google.com/g/hbayesdm-users/c/9jxD94k6pPc/m/xpkNYwFFCgAJ), so the LOOIC procedure may not give accurate results here. The field generally favors PPCs for model validation, but this secondary use for model comparison is valuable to us given the LOOIC interpretation limitations we're up against.

 

To prepare for a PPC, we generate the predicted values by setting the `inc_postpred = TRUE` argument in the model call. We'd like to compare these predicted values against the actual values to produce a graphical and/or numerical PPC, and I'm wondering whether there is existing hBayesDM functionality for this in R. I found helpful documentation for PPCs using the "bayesplot" and "rstanarm" packages (R 4.2.2), but these functions error when an hBayesDM model object is fed in, since they are designed for easy use specifically with stan objects. Similarly, I found guidance from this group for PPCs that again points to the "rstan" package (socialRL/code/reinforcement_learning_HBA.R at master · lei-zhang/socialRL). Since the `ts_par#()`, `dd_hyperbolic()`, `gng_m#()`, and `prl_rp()` functions return an hBayesDM model object, these helpful functions designed for compatibility with stan objects are not directly usable here.

 

Ideally there would be a PPC framework that accepts hBayesDM objects directly; does anything like that exist? Alternatively, we could extract the real and predicted values from the hBayesDM object and feed them manually into bayesplot functionality, or apply a different statistical procedure to the extracted predicted and real values.
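To illustrate the manual route, here is a rough sketch of what we have in mind, under the assumption that y_pred comes out as a draws × subjects × trials array of binary choices (simulated values stand in for real model output below):

```r
# Sketch: reshaping extracted predicted/real values into the format the
# bayesplot ppc_* functions expect (y: vector of observations,
# yrep: draws x observations matrix). Simulated stand-in data below.
set.seed(1)
n_draw <- 200; n_subj <- 5; n_trial <- 40
y_pred <- array(rbinom(n_draw * n_subj * n_trial, 1, 0.7),
                dim = c(n_draw, n_subj, n_trial))   # draws x subjects x trials
y_obs  <- matrix(rbinom(n_subj * n_trial, 1, 0.7), n_subj, n_trial)

# Flatten the subjects x trials axes into one observation axis.
# Column-major flattening keeps y and yrep columns aligned.
yrep <- matrix(y_pred, nrow = n_draw)   # draws x (subjects * trials)
y    <- as.vector(y_obs)

# Then, e.g.:
# bayesplot::ppc_bars(y, yrep)
# bayesplot::ppc_stat(y, yrep, stat = "mean")
```

The key point is only the reshape: once y is a vector and yrep a draws × observations matrix, the bayesplot PPC functions no longer need a stanfit object at all.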

 

In sum, we're curious whether there is existing functionality to support a PPC on an hBayesDM object, or whether the toolbox creators can advise on how a PPC should be conducted (e.g., an existing process or a suggested statistical approach) using the data output from this toolbox.


Many thanks, 


Morgan 


Jeongyeon Shin

Jan 19, 2026, 1:05:10 AM
to hbayesdm-users
Dear Morgan,

In hBayesDM, a PPC can be conducted using y_pred.
(For example, in gng_m1.stan, predicted values are saved in the generated quantities block as y_pred: https://github.com/CCS-Lab/hBayesDM/blob/develop/commons/stan_files/gng_m1.stan.)
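As a quick sketch of pulling those values out in R: assuming a fit produced with inc_postpred = TRUE, and assuming the posterior samples live in the fit's parVals element as a draws × subjects × trials array (with unused trials padded with -1, as in the Stan code), the indexing looks like the following. A small mocked array stands in for a real fit so the example is self-contained:

```r
# Sketch: extracting posterior predictive draws from an hBayesDM fit.
# Assumption: fit <- gng_m1(..., inc_postpred = TRUE) and samples live in
# fit$parVals$y_pred. A mock object stands in for a real fit here.
n_draw <- 100; n_subj <- 3; n_trial <- 20
mock <- array(sample(0:1, n_draw * n_subj * n_trial, replace = TRUE),
              dim = c(n_draw, n_subj, n_trial))
mock[, 3, 16:20] <- -1                  # padding for trials subject 3 lacks
fit <- list(parVals = list(y_pred = mock))

y_pred <- fit$parVals$y_pred            # draws x subjects x trials
valid  <- y_pred[1, , ] != -1           # mask of real (non-padded) trials

# Mean predicted response for subject 1, trial by trial:
subj1_rate <- colMeans(y_pred[, 1, ])
```

Filtering out the -1 padding before any comparison with the observed data is important; otherwise subjects with fewer trials will distort the summaries.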

I would recommend checking section '7) Posterior predictive checks' in the following tutorial, which explains how to conduct a PPC: 

Please let me know if you need any further clarification.


Best regards,
Jeongyeon Shin


On Sunday, January 18, 2026 at 3:05:38 AM UTC+9, morgb...@gmail.com wrote:

Morgan Beatty

Jan 20, 2026, 9:13:14 PM
to hbayesdm-users
Dear Jeongyeon, 

Thank you for your direction! We are familiar with the y_pred values; I omitted the direct reference to them in my initial inquiry, but we were able to generate them for all of the models of interest using the `inc_postpred = TRUE` argument in the initial model call.

We also implemented the PPC instructions you linked from the main GitHub page. Unfortunately, that code seems to apply only to graphical representations of individual-subject posterior predictive checks. Is there a way to generate a numeric value indicating each subject's fit to the model, which we could then use to inform overall model fit? Alternatively, is there a way to represent the fit of the entire model across all participants, either graphically or numerically?

Many thanks, 

Morgan 

Jeongyeon Shin

Jan 20, 2026, 9:19:44 PM
to hbayesdm-users
Dear Morgan,

At the moment, hBayesDM does not provide a function that outputs numeric PPC summaries (either at the individual-subject or group level). The example in the tutorial is mainly intended to illustrate how to extract and visualize posterior predictive samples.

That said, if you’re interested in a numeric summary of fit, one common approach is to compute discrepancy measures directly from y_pred. For example, you could compare the model-predicted responses to the observed responses and compute subject-level accuracy (or other summary statistics), and then aggregate these across subjects if desired. Similarly, group-level summaries can be obtained by pooling trials across participants for each posterior predictive draw and comparing these values to the observed data.
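For instance, a rough base-R sketch of that idea, assuming y_pred is a draws × subjects × trials array of 0/1 choices and obs is the matching subjects × trials matrix of observed choices (simulated values stand in for real model output):

```r
# Sketch: numeric PPC-style discrepancy summaries from y_pred.
# Simulated stand-in data; replace with values extracted from your fit.
set.seed(42)
n_draw <- 500; n_subj <- 10; n_trial <- 60
y_pred <- array(rbinom(n_draw * n_subj * n_trial, 1, 0.6),
                dim = c(n_draw, n_subj, n_trial))
obs <- matrix(rbinom(n_subj * n_trial, 1, 0.6), n_subj, n_trial)

# Subject-level fit: proportion of trials where the mean posterior
# prediction (thresholded at 0.5) matches the observed choice.
pred_mean <- apply(y_pred, c(2, 3), mean)            # subjects x trials
subj_acc  <- rowMeans((pred_mean > 0.5) == (obs == 1))

# Group-level summary: pooled accuracy per posterior draw, then its
# mean and 95% interval across draws.
draw_acc <- apply(y_pred, 1, function(d) mean(d == obs))
group_summary <- c(mean  = mean(draw_acc),
                   lower = quantile(draw_acc, 0.025, names = FALSE),
                   upper = quantile(draw_acc, 0.975, names = FALSE))
```

Subject-level accuracies like subj_acc can then be averaged or compared across candidate models, and the draw-wise group summary gives a sense of uncertainty rather than a single point estimate.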

I hope this helps, and please feel free to follow up if you have further questions.

Best regards,
Jeongyeon



On Wednesday, January 21, 2026 at 11:13:14 AM UTC+9, morgb...@gmail.com wrote: