MCgof function


Brendan Alting

Jul 16, 2025, 4:57:35 AM
to secr
Hi all, 

I was hoping to get some advice on using the newly implemented MCgof function for secr models. I understand that the resulting test statistic reflects the similarity between values simulated from the model and those observed in the capture history. From the function documentation, a value of 0.5 indicates a perfect fit, while values approaching zero indicate a very poor fit.

But what if the value is close to 1? And are there any boundaries or generally accepted values that would suggest the model is a good fit? For example, would 0.3 be considered too poor a fit for the model to be reliable? Or is this test simply a point to be noted when using the models?
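[Editor's note: for readers new to this statistic, MCgof is in the spirit of a Bayesian posterior predictive p-value, and in the posterior predictive check literature that ?MCgof cites, values near either extreme (0 or 1) are usually read as evidence of discrepancy. A minimal base-R sketch of the underlying Monte Carlo logic, using made-up numbers rather than anything produced by secr:]

```r
# Toy sketch of the Monte Carlo p-value logic (all values are made up,
# not real MCgof output): simulate replicate discrepancy statistics
# under a fitted model and compare them with the observed discrepancy.
set.seed(1)
T_obs <- 12.4                   # hypothetical observed discrepancy
T_sim <- rchisq(1000, df = 10)  # stand-in for model-simulated discrepancies
p <- mean(T_sim >= T_obs)       # proportion of simulations >= observed
p
```

[Values of p near 0.5 mean the observed discrepancy sits in the middle of the simulated distribution (good fit); values near 0 or 1 mean it sits in a tail.]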

Any guidance would be much appreciated as I understand this is an emerging area of research. 

Thanks in advance, 
Brendan

Brendan Alting

Jul 16, 2025, 5:09:17 AM
to secr
Some more context: these are the results I got from a model I ran:

[Attachment: Screenshot 2025-07-16 120709.png]

This would appear to me to be a very poor fit, although the combination of individuals and detectors (yik) looks OK. Would this model be considered unreliable on the basis of these results?

Thanks

Murray Efford

Jul 16, 2025, 6:45:35 AM
to secr
Brendan
I think you need to go to the Bayesian literature as cited in ?MCgof and elsewhere. I have not found the method useful myself.
Murray

Murray Efford

Jul 16, 2025, 7:04:54 AM
to secr
See also the background in section 13.7 of https://murrayefford.github.io/SECRbook/, although that does not answer your particular question.

Brendan Alting

Jul 19, 2025, 4:45:19 AM
to secr
Thanks for getting back to me, Murray. I'll look into the literature a little more.