Metrics for model comparisons


Murilo Guimaraes

unread,
Feb 11, 2025, 3:31:15 PM
to spOccupancy and spAbundance users
Dear list members,

I have a detection/non-detection dataset and I'd like to compare three modeling approaches: (i) a binomial GLM, (ii) a non-spatial occupancy model, and (iii) a spatial occupancy model.

My question is about which metrics I should use for the comparison. I've considered using the correlation between predictions, but I sense that correlation might not be a very good metric (for example, if all the predictions share the same bias). I've also considered comparing the intercept and covariate estimates under the three approaches, but nothing else comes to mind.

Does anybody have a feeling on what would be a good set of metrics to compare the different approaches?

Thanks in advance.

Murilo

Jeffrey Doser

unread,
Feb 13, 2025, 4:55:32 AM
to Murilo Guimaraes, spOccupancy and spAbundance users
Hi Murilo,

I would say how you compare the different approaches certainly depends on what you are primarily interested in. In general, I would suggest using multiple approaches to compare the modeling frameworks. Simple comparisons of covariate effect estimates across the approaches could be useful if you are particularly focused on inference. If focused on prediction, you could also do some sort of out-of-sample assessment of predictive performance: for example, fit each model to 75% of the data points, predict at the remaining 25%, and calculate some measure of predictive performance on those held-out points across the models. One caveat with this latter approach is that if you use the raw data points to assess prediction, you will be assessing the models' ability to predict the combination of occupancy and detection, not either of them separately, which is a key benefit of the occupancy framework. One useful approach to check out may be what we did in this paper when comparing models that do and don't account for imperfect detection in the case study.
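As an illustration of the holdout idea above, here is a minimal sketch (not from the thread) of scoring two sets of hypothetical predicted detection probabilities against held-out detection/non-detection outcomes with two common proper scoring rules, the Brier score and the log-loss. All data and model names below are made up for demonstration; in practice the predictions would come from the fitted GLM and occupancy models.

```python
# Hedged sketch: compare predictive performance on a holdout set using
# the Brier score and log-loss. The outcomes and the two prediction
# vectors are simulated placeholders, not results from any real model.
import math
import random

def brier_score(y, p):
    """Mean squared difference between 0/1 outcomes and predicted probabilities."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

def log_loss(y, p, eps=1e-12):
    """Negative mean Bernoulli log-likelihood (lower is better)."""
    return -sum(yi * math.log(max(pi, eps)) + (1 - yi) * math.log(max(1 - pi, eps))
                for yi, pi in zip(y, p)) / len(y)

random.seed(42)
n = 200
# Simulated "true" detection probabilities and held-out outcomes.
truth_p = [random.uniform(0.1, 0.9) for _ in range(n)]
y_holdout = [1 if random.random() < p else 0 for p in truth_p]

# Two hypothetical models: one well calibrated, one systematically biased.
# Note that the biased predictions are perfectly correlated with the
# calibrated ones, which is why correlation alone can be misleading.
predictions = {
    "calibrated": truth_p,
    "biased": [min(p + 0.25, 0.99) for p in truth_p],
}

for name, p in predictions.items():
    print(f"{name}: Brier = {brier_score(y_holdout, p):.3f}, "
          f"log-loss = {log_loss(y_holdout, p):.3f}")
```

The biased model scores worse on both metrics despite being perfectly correlated with the calibrated one, which echoes Murilo's concern that correlation between predictions can hide a shared or systematic bias.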

Kind regards,

Jeff



--
Jeffrey W. Doser, Ph.D.
Assistant Professor
Department of Forestry and Environmental Resources
North Carolina State University
Pronouns: he/him/his

Murilo Guimaraes

unread,
Feb 14, 2025, 8:07:21 AM
to Jeffrey Doser, David Eduardo Uribe Rivera, spOccupancy and spAbundance users
Dear David and Jeff,

Thanks for your suggestions; I will dig into that a bit.
Simulations were not in my plan for this project, but I am aware they could be a good way to go. Maybe I'll just make the comparisons for inference purposes and have a look at Jeff's paper comparing different modeling approaches.

Warm regards,
Murilo

--------------------------------------------------------------------------------------------------------------------------
Murilo Guimarães
Professor Adjunto I
Departamento de Biologia
Universidade Federal do Piauí - Brasil


