--
You received this message because you are subscribed to the Google Groups "unmarked" group.
To unsubscribe from this group and stop receiving emails from it, send an email to unmarked+u...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Hi all,
I find this an interesting discussion. Basically, I feel that stacking is an acceptable, but approximate, way of dealing with a sparse-data situation so that you can fit the model in unmarked. However, you do not fully accommodate the dependence structure due to repeated measurements at the same sites; that is, yes, you are committing pseudo-replication in the sense of Hurlbert (1984). By analogy with other cases where you have unmodelled spatial or temporal dependency in ecology (cf. models with or without spatial autocorrelation), I would say that you don't get biased parameter estimates, but simply too-small SEs. It would perhaps be better to use BUGS to fully incorporate the dependencies by adding a site random effect, and it might be an interesting exercise to compare the inferences between the two approaches. But I don't think that you have to.
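[A hedged sketch of the comparison Marc suggests, done with the ubms package (which fits random effects via Stan) instead of BUGS. The objects `umf_stacked`, the occupancy covariate `habitat`, and the site-identifier covariate `site` are placeholders, not from the original posts.]

```r
# Sketch: compare a plain stacked fit against a fit with a site random effect.
# Assumes umf_stacked is an unmarkedFrameOccu built from stacked site-year rows,
# with 'habitat' and 'site' (the original site ID, repeated across years) in siteCovs.
library(unmarked)
library(ubms)

# Stacked fit: each site-year row is treated as an independent "site"
m_stacked <- occu(~ 1 ~ habitat, data = umf_stacked)

# Same stacked data, but with a random intercept for the original site,
# which accommodates the repeated measurements Marc describes
m_ranef <- stan_occu(~ 1 ~ habitat + (1 | site), data = umf_stacked,
                     chains = 3, iter = 2000)

# Compare point estimates and SEs/posterior SDs between the two approaches;
# the stacked fit is expected to show smaller (overconfident) SEs
summary(m_stacked)
summary(m_ranef, "state")
```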
An extreme case of stacking is when you have no spatial replication at all, only temporal replication, e.g. 9 years of data for a single site. Check some recent publications by Yuichi Yamaura and colleagues on community occupancy models where they fit them in exactly this setting. For instance, in Journal of Applied Ecology 2011, 48, 67–75 they write: "We applied our model to 9 years of bird monitoring data after a forest fire at a single site (N36° 35′ 32″, E140° 37′ 20″) by substituting time (e.g. years) for space (e.g. sites)."
I have run some limited simulations for the related N-mixture model to convince myself that the estimates under a time-for-space substitution are fine. And they were. By the way, simulation is such a great tool to investigate such questions for yourself.
Regards -- Marc
Thank you both!
Thank you, Jay
Dear Aaron,
(Max 4 visits is not unusual at all.) And no, putting yearly data sideways will not give you the right answers, because you will then assume closure over the entire 3-year period. OK, this may not be disastrous, but you will then estimate some sort of probability of use rather than probability of permanent presence. Unless you have strong reason not to, I find stacking the better way to deal with multi-season data in a closed model.
Best regards -- Marc
I didn't see a response to Aaron's question about his specific setup. I did what he is describing: I have 474 three-visit samples from 187 point count stations (points), assuming closure among the three visits and using pcount. I'm stacking because I have different levels of years-post-harvest (YPH) among points (some only have data from one year post-harvest, while others have data from two or three years post-harvest). I'm interested in the effect YPH has on abundance. If I combined points with YPH as you all are talking about, I would have 474 "sites" and my models don't converge. So I ran my models with my 187 points stacked (into 474 samples) and used a covariate for YPH. So the two are linked, but my "sites" repeat for however many years.

For example, point count "KY_E1" would have three rows, tied to three different levels of SiteCov YPH, rather than having three different sites: "KY_E1_1", "KY_E1_2", "KY_E1_3". And then point count "SJ_E1" only has one YPH, so it has one row.

Is this not appropriate? Thank you.
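[A hedged sketch of the stacked setup described above: 474 point-year rows, each with 3 visit counts, and YPH as a site covariate. The data frame `counts` and its column names (`V1`–`V3`, `point`, `YPH`) are illustrative placeholders, not from the original post.]

```r
# Sketch: build a stacked unmarkedFramePCount, one row per point-year.
# 'counts' is assumed to hold 474 rows with three visit counts plus covariates.
library(unmarked)

y <- as.matrix(counts[, c("V1", "V2", "V3")])   # 474 x 3 count matrix

site_covs <- data.frame(
  point = counts$point,          # e.g. "KY_E1" repeats across its years
  YPH   = factor(counts$YPH)     # years post-harvest, varies within a point
)

umf <- unmarkedFramePCount(y = y, siteCovs = site_covs)

# N-mixture model with YPH on abundance; K is the upper bound for the
# latent abundance summation and should exceed the plausible maximum N
fm <- pcount(~ 1 ~ YPH, data = umf, K = 100)
summary(fm)
```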
On Wed, Mar 20, 2019 at 6:31 AM Aaron Grade <agra...@gmail.com> wrote:

Hello,

On this same topic (using a single-season occupancy model for multiple years): I have a limited number of visits per year (4 or fewer) for 3 years. Would it be appropriate to treat each site as the same site (rather than stacking by site-year) and then add year as a covariate on detection and occupancy, or is that entering into the territory of pseudoreplication/severe model assumption violations? If not, would adding year as a random effect in a mixed model framework reduce the pseudoreplication issues?

Thank you,
Aaron
# occu() takes a double right-hand-side formula: ~ detection ~ occupancy.
# All models below hold detection constant (~ 1) and vary the occupancy covariates.
m1 <- occu(data = CAT_UMF, ~ 1 ~ 1)
m2 <- occu(data = CAT_UMF, ~ 1 ~ 1 + YEAR)
m3 <- occu(data = CAT_UMF, ~ 1 ~ 1 + BDE)
m4 <- occu(data = CAT_UMF, ~ 1 ~ 1 + BPR)
m5 <- occu(data = CAT_UMF, ~ 1 ~ 1 + IMP)
m6 <- occu(data = CAT_UMF, ~ 1 ~ 1 + FPA)
m7 <- occu(data = CAT_UMF, ~ 1 ~ 1 + YEAR + BDE)
m8 <- occu(data = CAT_UMF, ~ 1 ~ 1 + YEAR + BPR)
m9 <- occu(data = CAT_UMF, ~ 1 ~ 1 + YEAR + IMP)
m10 <- occu(data = CAT_UMF, ~ 1 ~ 1 + YEAR + FPA)
m11 <- occu(data = CAT_UMF, ~ 1 ~ 1 + BDE + BPR)
m12 <- occu(data = CAT_UMF, ~ 1 ~ 1 + BDE + IMP)
m13 <- occu(data = CAT_UMF, ~ 1 ~ 1 + BDE + FPA)
m14 <- occu(data = CAT_UMF, ~ 1 ~ 1 + BPR + IMP)
m15 <- occu(data = CAT_UMF, ~ 1 ~ 1 + BPR + FPA)
m16 <- occu(data = CAT_UMF, ~ 1 ~ 1 + IMP + FPA)
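[A hedged follow-up to the model set above: candidate sets like this are usually ranked by AIC with unmarked's fitList()/modSel() utilities. This assumes all 16 fits converged on the same CAT_UMF data.]

```r
# Collect the fitted models and produce an AIC model-selection table
fl <- fitList(m1 = m1, m2 = m2, m3 = m3, m4 = m4,
              m5 = m5, m6 = m6, m7 = m7, m8 = m8,
              m9 = m9, m10 = m10, m11 = m11, m12 = m12,
              m13 = m13, m14 = m14, m15 = m15, m16 = m16)
modSel(fl)   # models sorted by AIC, with delta-AIC and AIC weights
```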
Dear Eric,
I would do this:
- From n sites surveyed over multiple years, sample with replacement n sites. Hence, some sites will appear in the bootstrap data multiple times and some sites not at all.
- Then do the stacking and fit the model, and save the MLEs (no need for SEs, which speeds things up).
- Repeat this 1000 or so times; take the SD of the bootstrap estimates as the bootstrap SE and the 2.5th and 97.5th percentiles as a bootstrapped 95% CI.
That's what we do in the upcoming AHM2 book too.
Best regards --- Marc
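[A hedged sketch of the nonparametric site bootstrap Marc outlines. `dat` (long-format multi-year data with a `site` column) and `fit_stacked()` (your own function that stacks a data set, fits the model, and returns the MLE vector via `coef()`) are placeholders.]

```r
# Site-level bootstrap: resample sites (not site-years) with replacement,
# refit the stacked model each time, and summarize the replicate MLEs.
set.seed(123)
site_ids <- unique(dat$site)
n <- length(site_ids)
B <- 1000

boot_mles <- t(replicate(B, {
  samp <- sample(site_ids, n, replace = TRUE)   # some sites appear twice, some never
  # carry over ALL years of each sampled site, relabeling duplicates
  boot_dat <- do.call(rbind, lapply(seq_along(samp), function(i) {
    d <- dat[dat$site == samp[i], ]
    d$site <- paste0(samp[i], "_", i)           # keep resampled copies distinct
    d
  }))
  fit_stacked(boot_dat)                         # stack, fit, return coef() vector
}))

boot_se <- apply(boot_mles, 2, sd)                         # bootstrap SEs
boot_ci <- apply(boot_mles, 2, quantile, c(0.025, 0.975))  # percentile 95% CIs
```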
From: 'eric....@nasa.gov' via unmarked [unma...@googlegroups.com]
Dear all,
I hope you're doing well. I recently read the above helpful discussion, which reminded me of a problem I'm facing. Like Mike, I have lots of camera trap data from different years (2016 to 2022), but the sampling periods are inconsistent: some traps were deployed for a whole year, others only for a few months, and the survey seasons also differed. My primary research objective is to fit the multi-year data with a single-season N-mixture/Royle-Nichols model, since I am not interested in a dynamic model.
Currently, I am deciding between two methods: using a "stacked" approach, or treating year as a random effect with the 'ubms' package in R. The stacked approach seems attractive, but it might not work well because my data were collected at different times of the year. On the other hand, treating year as a random effect might be better, but I'm not sure how to do it effectively.
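[A hedged sketch of the random-year option mentioned above, using ubms' stan_pcount() (the Bayesian counterpart of pcount; a Royle-Nichols analogue would swap in the appropriate ubms fitting function). `umf`, the `year` site covariate, and `K` are placeholders for the actual stacked data.]

```r
# Sketch: stacked site-year data with a random intercept for year on abundance.
# Assumes umf is an unmarkedFramePCount whose siteCovs include 'year' as a factor.
library(ubms)

fm_year <- stan_pcount(~ 1 ~ (1 | year), data = umf, K = 50,
                       chains = 3, iter = 2000)

summary(fm_year, "state")   # abundance submodel, including the year random effect
```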
I would really appreciate your advice on which method would be best for my situation; any suggestions or comments are very welcome. Your help would mean a lot to me.
Thanks,
Lwin