Examining model fit - ts_par#


Morgan Beatty

Nov 5, 2024, 8:32:58 PM
to hbayesdm-users

Hello all,

My group has used a few of the hBayesDM models with great success on our data, and we are now looking to use the package to model a two-stage decision making task. There are three functions available for this task in the package (ts_par4, ts_par6, and ts_par7). We wanted to fit our data with each of these functions, compare them using the recommended printFit() function to see which fits the data best, and then extract the parameters from the winning model and continue our analyses from there. The code we used to fit these models is below:


mod_4param <- ts_par4(data = msdmdat, niter = 2000, nwarmup = 1000, nchain = 4, ncore = 4)

mod_6param <- ts_par6(data = msdmdat, niter = 2000, nwarmup = 1000, nchain = 4, ncore = 4)

mod_7param <- ts_par7(data = msdmdat, niter = 2000, nwarmup = 1000, nchain = 4, ncore = 4)


At the end of each model fit, a warning is produced: "Warning: Pareto k diagnostic value is 4.63. Resampling is disabled. Decreasing tol_rel_obj may help if variational algorithm has terminated prematurely. Otherwise consider using sampling instead."

The models do converge, and when we compare the model fits using printFit(mod_4param, mod_6param, mod_7param, ic="looic"), we get further Pareto k warnings:


Warning messages:

1: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.

2: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.

3: Some Pareto k diagnostic values are too high. See help('pareto-k-diagnostic') for details.


Increasing the 'nthin' argument did not resolve these warnings. We examined our behavioral data to confirm that, in general, most participants performed the task with a sufficient level of understanding, and we are seeing the behavioral performance patterns we would expect from this task, so we do not think there is a fundamental flaw in our data that would make it too noisy for these models. Having confirmed this, we ran the same modelling process on the example data and saw the same warnings during the model fit comparison. We also checked the 'extract_ic()' output for these models to see the distribution of the Pareto k values.

We are hoping for some guidance on how to better understand the fit of our data under each of these models, as ultimately our goal is to extract the individual parameters from the best-fitting model for this task and compare them to some of the other tasks our participants completed. We saw references in another conversation thread (https://groups.google.com/g/hbayesdm-users/c/q05M9W959CI) to editing the Stan code to "keep the trial-level likelihood and compute LOOIC on the resulting matrix". We suspect that this may be our best path forward, but we are open to other advice or guidance on how to proceed. One basic aim would be to get more detail about which subjects are being well fit, and which poorly fit, by the model - how would we get this information?

If modifying the Stan code is the ideal route, I would appreciate guidance on how to approach and interpret that process. Another possibility is that the model itself may need some modification; perhaps some individuals or groups are better fit by a different architecture? What avenues are open to us here?

Thank you in advance for any insight!

Eunhwi Lee

Nov 14, 2024, 12:38:01 AM
to hbayesdm-users
Hello, 

The warning about high Pareto k diagnostic values indicates that the leave-one-out cross-validation (LOO) approximation might be unreliable, which can sometimes occur when certain subjects' behavior diverges significantly from the model assumptions, even if overall performance looks consistent. 

Here are some steps and options to consider:

1. Increase model robustness by raising niter and nwarmup (to improve sampling stability).
2. Check subject-specific fit with the loo package in R, i.e., inspect each participant's Pareto k value to identify those poorly fit by the models (see the sketch below).
3. Edit the Stan code, as you've mentioned.
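
For option 2, a minimal, untested sketch along these lines may help. It assumes the fitted object (e.g. mod_4param) stores one log_lik value per subject in its stanfit, which is the hBayesDM default, so each Pareto k corresponds to one participant:

library(loo)

# Per-subject log-likelihood matrix (posterior draws x subjects) from the stanfit
log_lik <- loo::extract_log_lik(mod_4param$fit, parameter_name = "log_lik")

# Run PSIS-LOO and pull out the diagnostics
loo_4param <- loo::loo(log_lik)
pareto_k   <- loo_4param$diagnostics$pareto_k

# Subjects with k > 0.7 are the ones driving the warning
which(pareto_k > 0.7)

The flagged subjects are the ones worth examining more closely, for example with posterior predictive checks.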

Please feel free to reach out for more assistance!

Best,
Eunhwi
On Wednesday, November 6, 2024 at 10:32:58 AM UTC+9, morgb...@gmail.com wrote:

Nathaniel Haines

Nov 14, 2024, 10:13:08 AM
to hbayesdm-users
Hi Morgan,

Have you read through this previous post? The discussion there covers the issues you note, along with some thoughts on moving forward.

If anything there is unclear or needs clarification, feel free to let us know.

Best,
Nate

Nathaniel Haines

Nov 14, 2024, 10:13:12 AM
to hbayesdm-users
In addition to what Eunhwi described, if this is your goal:

> One basic aim would be to get more detail about which subjects are being well fit, and which poorly fit, by the model - how would we get this information?

Then you are best off looking at person-level posterior predictive checks. The discussion linked above has some examples of how to extract posterior samples from the model and plot them against the raw data. LOOIC is only going to give you an idea of relative model performance, so it will not help you answer the above question. That said, you can also extract the Pareto k values using the loo package to show which subjects are producing the problematic values. Check out the extract_ic function to see how to apply the loo function to models from hBayesDM; the part where we call loo is the relevant piece. With the result from that, you can look at person-level Pareto k values, lpd scores, etc.
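
As a rough, untested sketch of that workflow: it assumes the model was fit with inc_postpred = TRUE so that the simulated choices (y_pred_step1 / y_pred_step2) are saved in the stanfit, and that the raw data sit in the rawdata element with the package's subjID / level1_choice columns.

library(rstan)

# Posterior draws x subjects x trials array of simulated first-stage choices
y_pred1 <- rstan::extract(mod_4param$fit, pars = "y_pred_step1")$y_pred_step1

# Posterior mean of the simulated choice for each subject and trial
pred_mean <- apply(y_pred1, c(2, 3), mean)

# Observed first-stage choices for one subject (assumes the row order of
# pred_mean matches the order of unique subjIDs in the preprocessed data)
subj_ids <- unique(mod_4param$rawdata$subjID)
obs_1    <- mod_4param$rawdata$level1_choice[mod_4param$rawdata$subjID == subj_ids[1]]

# Overlay predicted and observed first-stage choices, trial by trial
plot(pred_mean[1, seq_along(obs_1)], type = "l", ylim = c(1, 2),
     xlab = "Trial", ylab = "First-stage choice (1 or 2)")
points(obs_1, col = "red", pch = 16)

From there you can summarize agreement per subject (e.g. the proportion of trials where the prediction, rounded to 1 or 2, matches the observed choice) and compare that against each subject's Pareto k.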

Morgan Beatty

Nov 20, 2024, 11:05:32 AM
to hbayesdm-users
Thank you both for your suggestions! I tried increasing niter and nwarmup (as well as nthin, as was suggested in another discussion thread), but the warnings persist.

The person-level posterior predictive checks would be immensely helpful. I looked at the post you linked above, but I am not sure how to locate the y_pred values for these two-stage models. I am not finding the larger array that appears in the y_pred object for the IGT task in the reference post. For the ts_par4 model, there is a numeric vector of length 2 (values 217 and 199) under both model$fit@par_dims$y_pred_step1 and model$fit@par_dims$y_pred_step2, one for each of the two choice stages. We have 217 subjects in the current data set, so these vectors seem to reflect the subjects-by-trials dimensions, but I can't find the array of individual y_pred values itself. Is there a different location within the model object where I can find it, or does it need to be generated in some way?

I will look into the loo package to identify individual model fit parameters, thank you for that suggestion! 
