The downside of choosing a single model (via ML or by intuition) is that you ignore the uncertainty about which model is best; the purist Bayesian approach would be to account for that extra uncertainty by integrating over it (e.g. using bModelTest). Pragmatically, it is usually fine to stick with one model, chosen either through a model-testing approach or based on practical concerns (will it even converge?). In that case it can be useful to do a sensitivity analysis: repeat the run several times with alternative plausible substitution models, and see how much effect the choice has on your skyline plots (or other parameter estimates and conclusions). If there is no substantial effect, there is no issue. If there is an effect, then you need to account for the model uncertainty.
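As a minimal sketch of what such a sensitivity check might look like: suppose you have posterior samples of some parameter of interest from separate runs under each candidate substitution model (in practice you would read these from the runs' log files). The model names and the synthetic samples below are placeholders, not real results; the idea is just to compare credible intervals across models.

```python
import numpy as np

def credible_interval(samples, level=0.95):
    """Central credible interval from posterior samples."""
    lo = (1 - level) / 2
    return np.quantile(samples, [lo, 1 - lo])

def intervals_overlap(a, b):
    """True if two (low, high) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Synthetic stand-in for posterior samples of a parameter of interest
# (e.g. an effective population size), one set per substitution model.
rng = np.random.default_rng(42)
posteriors = {
    "JC69":  rng.normal(1.00, 0.10, 5000),
    "HKY":   rng.normal(1.02, 0.11, 5000),
    "GTR+G": rng.normal(0.98, 0.12, 5000),
}

intervals = {m: credible_interval(s) for m, s in posteriors.items()}
for model, (lo, hi) in intervals.items():
    print(f"{model:6s} 95% CI: [{lo:.3f}, {hi:.3f}]")

# Crude check: do all pairwise credible intervals overlap?
models = list(intervals)
all_overlap = all(
    intervals_overlap(intervals[a], intervals[b])
    for i, a in enumerate(models) for b in models[i + 1:]
)
print("All credible intervals overlap:", all_overlap)
```

Overlapping intervals alone do not prove the conclusions are robust, but clearly non-overlapping intervals are a strong sign that the model choice matters and the uncertainty needs to be accounted for.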