Hi all,
I have produced three seasonal models relating species occurrence (binomial) to a mix of continuous and categorical environmental variables. I am presenting the AUC values from the evaluation output (reviewers/readers will be on familiar ground there), and I initially thought that the pseudo xR2 values would be another metric that reviewers/readers could intuitively grasp.
But after doing some reading on pseudo R2, I'm not so sure. As I understand it, there are more than a few different pseudo R2 metrics out there, and they can produce quite different values on the same data set (~0.3-0.5 in one example). And it appears that the pseudo R2 in HN is unique to HN?
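To illustrate the kind of divergence I mean, here is a quick toy sketch in Python (using statsmodels and simulated data, nothing to do with my actual models) that computes three common pseudo R2 formulas on the same logistic fit:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * x - 0.3))))

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)      # fitted model
null = sm.Logit(y, np.ones((len(y), 1))).fit(disp=0)   # intercept-only model

n = len(y)
mcfadden = 1 - fit.llf / null.llf                        # McFadden
cox_snell = 1 - np.exp(2 * (null.llf - fit.llf) / n)     # Cox & Snell
nagelkerke = cox_snell / (1 - np.exp(2 * null.llf / n))  # Nagelkerke (rescaled)
print(mcfadden, cox_snell, nagelkerke)

The three numbers come from the same model and the same data, yet they differ by a fair margin, which is what worries me about readers' intuitions.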
It seems to me that the xR2 values are fine for comparing fit among my models, but may actually mislead readers who expect the pseudo xR2 to have values similar to a traditional R2.
So I have two questions. First, for the pseudo xR2 in HN, what method is used (and whom should I cite for it), and how do pseudo xR2 values compare to a traditional R2, beyond the fact that the two are correlated?
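For what it's worth, my working guess (please correct me if this is wrong) is that xR2 is a cross-validated R2: 1 - RSS/TSS, with the residuals taken from leave-one-out estimates, i.e. each point is excluded from the neighborhood used to estimate it. A toy sketch of that idea in Python (simple Gaussian local-mean smoother with a made-up tolerance; not HN's actual implementation):

import numpy as np

def cross_r2(y, y_hat_loo):
    # 1 - RSS/TSS, residuals from leave-one-out estimates
    # (my guess at the xR2 formula, not from HN documentation)
    rss = np.sum((y - y_hat_loo) ** 2)
    tss = np.sum((y - np.mean(y)) ** 2)
    return 1 - rss / tss

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = rng.binomial(1, x)               # binary response, as in my models
tol = 0.1                            # hypothetical smoothing tolerance

y_hat = np.empty(len(y))
for i in range(len(y)):
    w = np.exp(-0.5 * ((x - x[i]) / tol) ** 2)
    w[i] = 0.0                       # leave the target point out of its own estimate
    y_hat[i] = np.sum(w * y) / np.sum(w)

print(cross_r2(y, y_hat))

If that guess is right, a negative xR2 would even be possible when the cross-validated estimates do worse than the grand mean, which a traditional R2 cannot do.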
Secondly, how can I best present the model evaluation output so that reviewers/readers can judge whether the models are garbage, without my resorting to hand waving (supporting evidence from other analyses that are consistent with the model results, and general consistency among models with different neighborhood sizes)? I ask this especially in light of the deserved pushback on, and growing attention to, the too-frequent lack of transparent model evaluations in papers using an AIC approach. I expect reviewers will not be familiar with NPMR, and they will really drill down on the model evaluations (as they should).
My xR2 values range from 0.19 to 0.3 (these result from using Random samples vs. Present samples, rather than present/absent samples, and carry a relatively high commission error of 22% against an omission error of 0.02%).
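In case it helps to be concrete about how I would report those rates, here is a toy sketch in Python/scikit-learn (simulated stand-in data, not my real evaluation set; and I am assuming the usual SDM conventions, omission = presences predicted absent, commission = absences predicted present):

import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
# toy stand-ins for one model's evaluation set
y_true = rng.binomial(1, 0.3, size=300)
y_prob = np.clip(0.35 * y_true + rng.normal(0.3, 0.15, size=300), 0, 1)

auc = roc_auc_score(y_true, y_prob)

threshold = 0.5                            # the threshold choice matters here
y_pred = (y_prob >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

omission = fn / (fn + tp)                  # presences predicted absent
commission = fp / (fp + tn)                # absences predicted present
print(f"AUC={auc:.2f}  omission={omission:.1%}  commission={commission:.1%}")

If HN defines these rates differently, I would appreciate a pointer to the definitions it uses.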
Thank you,
Ken