To sanity-check the estimation process, one can fix a set of ground-truth parameter values, simulate data from them, run the inference, and then see how close the recovered parameters are to the ground truth, e.g. by regressing the estimates on the true values.
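A minimal sketch of such a recovery check, using a toy conjugate normal model so it runs end to end (the model, prior scale, and number of replications are illustrative assumptions, not part of the original workflow):

```python
# Toy recovery check: estimate the mean of a Gaussian with known sigma
# (conjugate normal-normal model), then regress the estimates on the truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sigma, n, prior_sd = 1.0, 20, 2.0

true_mu = rng.normal(0.0, prior_sd, size=100)          # ground-truth parameters drawn from the prior
post_mean = np.empty_like(true_mu)
for i, mu in enumerate(true_mu):
    y = rng.normal(mu, sigma, size=n)                  # simulate data from the ground truth
    post_prec = 1 / prior_sd**2 + n / sigma**2         # conjugate posterior precision
    post_mean[i] = (y.sum() / sigma**2) / post_prec    # conjugate posterior mean (prior mean = 0)

# Slope ~ 1, intercept ~ 0, and high R^2 indicate the parameters are recovered.
slope, intercept, r, _, _ = stats.linregress(true_mu, post_mean)
print(f"slope={slope:.2f}  intercept={intercept:.2f}  R^2={r**2:.2f}")
```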
In a Bayesian setting, this can be quantified with the posterior shrinkage and z-score (see Eqs 11, 12 here).
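As a rough sketch, assuming the usual definitions (shrinkage = 1 - posterior variance / prior variance; z-score = (posterior mean - true value) / posterior sd; the prior scale and draws below are made up for illustration):

```python
# Posterior shrinkage and z-score for a single parameter.
import numpy as np

def shrinkage_and_zscore(posterior_draws, prior_sd, true_value):
    post_mean = posterior_draws.mean()
    post_sd = posterior_draws.std(ddof=1)
    shrinkage = 1.0 - (post_sd / prior_sd) ** 2   # ~1: data informative, ~0: prior returned
    z = (post_mean - true_value) / post_sd        # large |z|: posterior misses the truth
    return shrinkage, z

rng = np.random.default_rng(1)
draws = rng.normal(0.9, 0.2, size=4000)           # stand-in for posterior draws
print(shrinkage_and_zscore(draws, prior_sd=2.0, true_value=1.0))
```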
Before doing this, one needs to verify the convergence of the sampling algorithm (for MCMC, using the metrics in the appendix of this paper).
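For example, with ArviZ (assuming it is installed and the draws are arranged as a (chains, draws) array; the fake chains below are only placeholders), the standard R-hat and effective-sample-size diagnostics can be read off a summary:

```python
# Basic convergence diagnostics: R-hat close to 1 and large bulk/tail ESS are
# the usual requirements before trusting the fit.
import numpy as np
import arviz as az

rng = np.random.default_rng(2)
chains = rng.normal(0.0, 1.0, size=(4, 1000))     # stand-in for 4 MCMC chains

idata = az.convert_to_inference_data(chains)
print(az.summary(idata, kind="diagnostics"))      # mcse, ess_bulk, ess_tail, r_hat
```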
For simulation-based inference, we can use posterior ranks (see this paper); to assess posterior calibration, we can use simulation-based calibration (see more details here).
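A toy sketch of the rank/SBC idea, again using an exact conjugate posterior so calibration should hold by construction (the model, the number of replications, and the uniformity check via a KS test are illustrative assumptions):

```python
# Toy simulation-based calibration: for each prior draw, simulate data, draw from
# the posterior, and record the rank of the true value among the posterior draws;
# if inference is calibrated, the ranks are uniform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sigma, n, prior_sd, L = 1.0, 10, 2.0, 100          # L posterior draws per replication

ranks = []
for _ in range(500):
    mu_true = rng.normal(0.0, prior_sd)            # ground truth drawn from the prior
    y = rng.normal(mu_true, sigma, size=n)         # simulate data
    post_prec = 1 / prior_sd**2 + n / sigma**2     # exact conjugate posterior
    post_mean = (y.sum() / sigma**2) / post_prec
    draws = rng.normal(post_mean, np.sqrt(1 / post_prec), size=L)
    ranks.append(int((draws < mu_true).sum()))     # rank of the truth among the draws (0..L)

# Uniformity check on the ranks (a rank histogram is the usual visual diagnostic).
print(stats.kstest(np.array(ranks) / L, "uniform"))
```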
The posterior predictive check
(i.e., generating data from the model using parameters drawn from
the estimated posterior and then comparing them with the observed data)
can validate the reliability of the inference process, by checking whether the model correctly reproduces the (features of the) observation.
See main ref here.
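A toy posterior predictive check along those lines (the conjugate model and the choice of the sample mean as the test statistic are only illustrative assumptions):

```python
# Draw parameters from the posterior (exact conjugate here), generate replicated
# datasets, and compare a summary statistic of the replicates with the observed one.
import numpy as np

rng = np.random.default_rng(4)
sigma, prior_sd = 1.0, 2.0
y_obs = rng.normal(1.0, sigma, size=30)                    # "observed" data

post_prec = 1 / prior_sd**2 + y_obs.size / sigma**2
post_mean = (y_obs.sum() / sigma**2) / post_prec

rep_stats = []
for _ in range(2000):
    mu = rng.normal(post_mean, np.sqrt(1 / post_prec))     # draw from the posterior
    y_rep = rng.normal(mu, sigma, size=y_obs.size)         # replicated dataset
    rep_stats.append(y_rep.mean())                         # feature/statistic of interest

# Posterior predictive p-value for the chosen statistic; values near 0 or 1
# flag a mismatch between model and data for that feature.
p_value = np.mean(np.array(rep_stats) >= y_obs.mean())
print(f"posterior predictive p-value (mean): {p_value:.2f}")
```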
Best,