It took me a long time to discover that region (in this dataset) made more sense as an observation covariate. It may be hard to explain my reasoning, and of course I may be incorrect! It's not my preferred option, but it was the only one that appeared to work with this dataset and produced results.
This dataset concerns an invasive species in a 750-acre area, and the traps being deployed were moved around on a daily basis. The managers constantly adapted their strategies to maximize capture. This meant no trap remained in a given spot (GPS) over the duration of sampling, and thus there were no consistently sampled sites (M). I defined five regions based on habitat differences within the 750 acres to see if the detection probability or abundance estimates differed among these habitats. In order to guarantee I had zeros in my count data, the data could not be condensed in a way that created permanent sites. The square matrix also made this a troubleshooting nightmare (so many pivot tables and reorganization experiments). This meant that every site (M) had to be defined as an individual trap opening, so the region changes with each M. This was the only method that appeared to work.
In the context of this data:
M = sites. Individual trap “checks”.
Y = count data or detection data (Y = 1 or Y = 0) at each of the sites (M). For this report, Y represents the catch of each trap (ranging from 0 to >100).
siteCovs: columns for site-level covariates, things that remain constant for every site (M) across visits (J). NULL here.
obsCovs: columns for observation-level covariates, things that differ per trap opening. Region, Method, and Effort.
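For concreteness, the layout above can be sketched as an unmarked frame. This is only a hedged reconstruction of the design as described; all object names (y_counts, region_mat, method_mat, effort_mat) are hypothetical placeholders, not the poster's actual files:

```r
# Sketch: an unmarkedFramePCount where every "site" row is one trap opening.
# All object names below are made up for illustration.
library(unmarked)

# y_counts: M x J matrix of catches (rows = trap openings)
# region_mat, method_mat, effort_mat: M x J matrices of observation covariates
umf <- unmarkedFramePCount(y = y_counts,
                           siteCovs = NULL,   # nothing is constant per site here
                           obsCovs = list(region = region_mat,
                                          method = method_mat,
                                          effort = effort_mat))
summary(umf)
```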
Perhaps that explains it better? What do you think?
Wow, now that is a fairly complex design. I must admit that I do have some concerns about whether either standard occupancy models (i.e., function occu() in unmarked) or (even more so) multinomial N-mixture models (e.g., function multinomPois()) are appropriate for the data produced by it. The two complications seem to be:
I think that there may be ways in which the data produced by this design can be subsumed into a design assumed by the model fitting functions in unmarked, but I am not sure.
Best regards --- Marc
To view this discussion on the web visit https://groups.google.com/d/msgid/unmarked/d93b58fe-1d23-4a43-9a10-d685e1d06279n%40googlegroups.com.
I agree, it's unusual, and there has been a lot of compromise to see if this type of analysis is applicable to the situation. But at the end of the day, the goal of my report is to determine whether models like those discussed in unmarked could fit these data (I'm not planning on publishing the results). And while I have detection probability results, I believe they represent a simulation; I'm not confident the results are accurate. I'll have a lot to unpack in my report discussion.
You're correct, the individuals are removed and frozen, so this is a removal study. I reviewed the assumptions of a removal model using multinomPois() and I believe this dataset fits the assumptions regarding migration and reproduction/death. It is a funky enclosed aquatic system.
Do you think there’s a way for me to attempt an abundance result, without having a site covariate? I'm going to experiment with this for another month.
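One possible route, sketched below under the removal-design assumption: fit an intercept-only abundance model, so that no site covariate is required at all. The data objects here are hypothetical stand-ins, not a definitive implementation:

```r
# Sketch: removal design with multinomPois(), intercept-only abundance.
# y_counts and effort_mat are placeholder objects.
library(unmarked)

umf <- unmarkedFrameMPois(y = y_counts, type = "removal",
                          obsCovs = list(effort = effort_mat))

# First formula part = detection, second = abundance (~1 means no site covariate)
fm <- multinomPois(~effort ~1, data = umf)

# With an intercept-only abundance model, the estimate can be back-transformed
backTransform(fm, type = "state")
```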
I’ve included two sources that helped me understand the assumptions of the models. For anyone lost, these are informative places to start; there are loads more resources out there.
Williams, B.K., Nichols, J.D. and Conroy, M.J., 2002. Analysis and Management of Animal Populations. Academic Press.
Fiske, I. and Chandler, R., 2011. unmarked: an R package for fitting hierarchical models of wildlife occurrence and abundance. Journal of Statistical Software, 43(10), pp. 1-23.
I have created a new Excel sheet using only the stationary traps. I wasn't even sure I had enough to work with, but the set appears big enough to get some results, though not all I was hoping for. At this point I can now make region and method siteCovs! Effort remains an obsCov. Making the switch was the right decision; my results would have been inaccurate otherwise.
Two of my region outputs appear fine, but three are unable to provide SE values. I'm not sure why this is, especially when looking at the raw data: the three regions lacking SEs are larger than those that produce SE values. There are NAs in the dataset, as some traps have stationary data for fewer days than others. I'm going to try to find an answer to this, but if you have any hunches, I'd be grateful.
At the end of the day, this dataset may just be incompatible with continued modelling.
Here are two of the outputs: one with SE and one without.
   Predicted          SE       lower      upper effort         region Method
1 0.01421498 0.002745395 0.009726411 0.02073157      1          Marsh Shrimp

   Predicted  SE lower upper effort         region Method
1 0.01202877 NaN   NaN   NaN      1 NorthernExtent Shrimp
Thanks for the update. I am pretty sure that using only the data from stationary traps, with at least some repeated measurements, is the right decision. Now, these missing SEs are usually a sign that a dataset is very small, that there are very few detections of a species, or that a model is too complex given the amount of information in the data. Sometimes it's also a consequence of the optimization not finding the global maximum, and that may then be mended by using better starting values.
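For the starting-values route just mentioned, unmarked's fitting functions accept a `starts` vector. A hedged sketch (the formula, the objects fm and umf, and the chosen values are illustrative only; the length of `starts` must match the number of model parameters):

```r
# Sketch: retry the fit from different starting values when SEs come back NaN.
# fm is an earlier multinomPois() fit; umf is the corresponding frame.
np <- length(coef(fm))                      # number of parameters in the model
fm2 <- multinomPois(~effort ~region, data = umf,
                    starts = rep(0.5, np))  # arbitrary alternative starts
cbind(coef(fm), coef(fm2))                  # compare the two solutions
```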
Best regards -- Marc