--
You received this message because you are subscribed to the Google Groups "unmarked" group.
To unsubscribe from this group and stop receiving emails from it, send an email to unmarked+u...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.



Dear Alex,
re. CIs for a ZIP fit of the Nmix model, you can use the predict function in Marc Mazerolle's AICcmodavg package (pointed out to me by Andy Royle).
Regards -- Marc
Question to Alex: so you no longer get the "initial value in 'vmmin' is not finite" error when fitting the model after you updated unmarked? I just did, and I still get it ...
Hi Richard,
I am using R version 2.15.2 (2012-10-26) -- "Trick or Treat" and the latest versions of the packages. Perhaps I should upgrade R, then?
Regards -- Marc
Dear Alex,
re. AICcmodavg: I haven't tried it out yet ...
Regards -- Marc

Hi Marc, what versions of unmarked and RcppArmadillo are you using?
Regarding the delta method for the ZIP model, I'm afraid I don't have time to write the code right now, but it would be great if someone wanted to contribute code.
Richard
--
Richard Chandler | University of Georgia | Warnell School of Forestry and Natural Resources
Hi All,
Until recently I had been getting good results fitting pcount models to bird survey data. I then relaxed some of my data filtering to allow more variation in survey conditions, thinking that this might help improve the fit of the detectability component of my models. Until this point I was getting good quadratic fits to my data (see plots below), and was able to predict from ZIP fits using AICcmodavg. Now I seem to be getting very flat, or sometimes even concave, fits to the same data!
I have now gone back to my original data to test, and the results are similar. Adding many observation covariates (overfitting, essentially) makes little difference, as does changing the family (P, NB). Have I missed something? Attached is a zipped archive with the script and files necessary to recreate the models and plot. Any thoughts would be much appreciated.
best regards
Alex
On 9/08/13 1:34 PM, Richard Chandler wrote:
Hi Marc, what versions of unmarked and RcppArmadillo are you using?
Regarding the delta method for the ZIP model, I'm afraid I don't have time to write the code right now, but it would be great if someone wanted to contribute code.
Richard
On Fri, Aug 9, 2013 at 4:21 AM, Kery Marc <marc...@vogelwarte.ch> wrote:
Question to Alex: so you no longer get the "initial value in 'vmmin' is not finite" error when fitting the model after you updated unmarked? I just did, and I still get it ...
From: unma...@googlegroups.com [mailto:unma...@googlegroups.com] on behalf of Kery Marc
Sent: Friday, 9 August 2013 10:17
To: 'unma...@googlegroups.com'
Subject: RE: [unmarked] pcount
Dear Alex,
re. CIs for a ZIP fit of the Nmix, you can use a predict function in Marc Mazerolle’s AICcmodavg package (pointed out to me by Andy Royle).
Regards -- Marc
From: unma...@googlegroups.com [mailto:unma...@googlegroups.com] on behalf of Alex Anderson
Sent: Friday, 9 August 2013 09:37
To: unma...@googlegroups.com
Subject: Re: [unmarked] pcount
Dear all,
Thanks so much for sharing helpful comments. Following Richard's suggestions, I've used the R engine and succeeded in having pcount fit some Nmix models with obsCovs to my bird survey data from Australian montane rainforests. Today, after a package update, I am even having success with the C engine (thanks to Richard?). As Marc and others have pointed out before, it is possible to get models with good AIC performance and GOF whose estimates are nonetheless way above what one would expect from the data. I have a set of models in which a ZIP model with no detection covariates outperforms NB, but only just.
                              nPars     AIC  delta    AICwt cumltvWt
fm7:~1~T+T^2+Pptn,ZIP             6 2909.32   0.00  4.5e-01     0.45
fm13:~ta~T+T^2,NB                 6 2910.24   0.92  2.8e-01     0.74
fm20:~Wt+ta~T+T^2,ZIP             8 2911.83   2.50  1.3e-01     0.87
fm22:~Wt+Sn+ta~T+T^2+Pptn,ZIP     8 2911.83   2.50  1.3e-01     0.99
fm15:~Sn~T+T^2,NB                 6 2920.72  11.40  1.5e-03     1.00
fm5:~1~T+T^2,NB                   5 2920.86  11.54  1.4e-03     1.00
fm11:~Wn~T+T^2,NB                 6 2921.48  12.16  1.0e-03     1.00
fm18:~St~T+T^2,ZIP                6 2921.53  12.21  1.0e-03     1.00
fm9:~Wt~T+T^2,NB                  6 2921.95  12.63  8.2e-04     1.00
fm8:~1~T+T^2+Pptn,NB              6 2948.27  38.94  1.6e-09     1.00
fm14:~ta~T+T^2,ZIP                6 2961.14  51.81  2.5e-12     1.00
fm21:~Wt+Sn+ta~T+T^2+Pptn,NB      8 2962.04  52.71  1.6e-12     1.00
fm6:~1~T+T^2,ZIP                  5 2969.38  60.06  4.1e-14     1.00
fm12:~Wn~T+T^2,ZIP                6 2969.71  60.39  3.5e-14     1.00
fm19:~Wt+ta~T+T^2,NB              6 2970.11  60.79  2.9e-14     1.00
fm16:~Sn~T+T^2,ZIP                6 2970.35  61.02  2.5e-14     1.00
fm17:~St~T+T^2,NB                 6 2970.35  61.02  2.5e-14     1.00
fm10:~Wt~T+T^2,ZIP                6 2971.24  61.91  1.6e-14     1.00
fm3:~1~T,NB                       4 3134.72 225.40  5.1e-50     1.00
fm4:~1~T,ZIP                      4 3170.93 261.61  7.0e-58     1.00
fm2:~1~T,P                        3 3464.14 554.82 1.5e-121     1.00
fm1:~1~1,P                        2 3591.60 682.28 3.2e-149     1.00
(site covariates: "T" = mean annual temperature, "Pptn" = mean annual precipitation;
obsCovs: "ta" = temperature anomaly of the survey (= survey temperature minus mean annual temperature),
"Wt" = survey wetness (= mostly canopy drip in the rainforest), "Sn" = season, "Wn" = wind,
"St" = start time)
Model fits are good for these top models:
#fm7:~1~T+T^2,ZIP
Call: parboot(object = fm7, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3329         -111.1            620.4       0.5149
Chisq        5819         2011.9           1046.9       0.0396
freemanTukey  613           52.2             43.9       0.0891
#fm13:~ta~T+T^2,NB
Call: parboot(object = fm13, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3293         -529.9            761.4       0.7426
Chisq        5529         1690.8           1268.7       0.0396
freemanTukey  603           13.6             39.7       0.3465
#fm20:~Wt+ta~T+T^2,ZIP
Call: parboot(object = fm20, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3279         -473.4            744.1       0.7525
Chisq        5460         1770.4           1058.4       0.0594
freemanTukey  600           19.6             42.9       0.3168
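[The fitstats function passed as statistic = in these parboot calls is not shown in the thread. A plausible definition, modeled on the example in unmarked's ?parboot help page, would compute the three discrepancy measures reported above; gof_stats() here is a hypothetical helper so the arithmetic can be checked without a fitted model.]

```r
# Sketch of a fitstats-style GOF statistic (assumption: modeled on the
# ?parboot example in unmarked; the thread does not show the definition).
gof_stats <- function(observed, expected) {
  resids <- observed - expected
  c(SSE          = sum(resids^2, na.rm = TRUE),
    Chisq        = sum(resids^2 / expected, na.rm = TRUE),
    freemanTukey = sum((sqrt(observed) - sqrt(expected))^2, na.rm = TRUE))
}

# Wrapper with the signature parboot() expects: a function of the fitted model.
fitstats <- function(fm) {
  gof_stats(unmarked::getY(fm@data), fitted(fm))
}
```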
but the estimates they give vary widely! E.g., in this case, where the observed max(count) = 12 individuals, the maximum estimate from a NegBin (top figure) is around 50 (!), while that from a ZIP with obsCovs is around 13.8 (middle figure), and around 20 without obsCovs (bottom figure). Detectability is reasonably good in this species, despite the habitat context (Lewin's Honeyeater, a medium-sized, vocal passerine with a loud and distinctive call).
As pointed out previously by Richard, this brings me up against a current limitation in the unmarked code for predict where the prior is a ZIP: at the moment, no confidence intervals are calculated for a ZIP mixture. Confidence intervals for zero-inflated mixtures (NegBin also), and perhaps the quicker C++ code to run them, would be a fantastic addition to unmarked. In the meantime, is there any example code out there for a recommended way to achieve this outside unmarked?
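[One way to get approximate CIs outside unmarked is the delta method mentioned earlier in the thread. Here is a minimal base-R sketch for the ZIP expected abundance E(N) = (1 - psi) * lambda, with lambda on the log scale and psi on the logit scale; the values of beta, alpha, and V below are illustrative stand-ins for what coef() and vcov() would return from a fitted model.]

```r
# Delta-method SE and Wald CI for E(N) = (1 - psi) * lambda under a ZIP
# mixture. beta (log-scale lambda intercept), alpha (logit-scale
# zero-inflation parameter), and V (their covariance matrix) are made-up
# numbers standing in for estimates from a fitted model.
beta  <- 1.2
alpha <- -0.8
V <- matrix(c(0.04, 0.005, 0.005, 0.09), 2, 2)

lambda <- exp(beta)
psi    <- plogis(alpha)
EN     <- (1 - psi) * lambda

# Gradient of E(N) w.r.t. (beta, alpha):
#   dEN/dbeta  = (1 - psi) * lambda
#   dEN/dalpha = -psi * (1 - psi) * lambda   (since dpsi/dalpha = psi*(1-psi))
g <- c((1 - psi) * lambda, -psi * (1 - psi) * lambda)

se <- sqrt(drop(t(g) %*% V %*% g))
ci <- EN + c(-1.96, 1.96) * se
```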
Thanks again for your helpful comments.
regards
Alex
On 8/08/13 10:40 PM, Kery Marc wrote:
Dear Murray,
yes, that's the paper. And several people, including myself, have fitted Nmix models with Poisson or NegBin priors for abundance and got unrealistic abundance estimates from the latter, even when a traditional GOF test (e.g., based on chi-square) indicated that the model fit. (As an aside, this is a good point to remember: whether a model fits or not does not necessarily mean anything in terms of whether it is useful.)
I find this a difficult problem: deciding which mixture to adopt for N in the model. Quite often, we find that a Poisson mixture does not fit, even when we add a couple of covariates. Since traditional wisdom says we should not base our inference on a model that does not pass some GOF test, we should therefore try some other mixture distribution, e.g., the ZIP or the NegBin, which are currently implemented in unmarked. Others that have been fit in the context of an Nmix model are the Poisson log-normal or a DPP (for the latter, see the 2008 Biometrics paper by Dorazio et al.). There is clearly scope for research here.
Kind regards -- Marc
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of Murray Efford [murray...@otago.ac.nz]
Sent: 08 August 2013 14:03
To: unma...@googlegroups.com
Subject: RE: [unmarked] pcount
Hi Marc et al
Would that be Joseph, Elkin, Martin & Possingham (2009) Modeling abundance using N-mixture models: the importance of considering ecological mechanisms. Ecol. Appl. 19:631-642?
It seems to fit. I'm curious how we deal convincingly with strong model-dependence in these cases. Perhaps we can rely on the accumulated wisdom of practitioners, but that is a little hard to justify to statisticians!
Murray
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of Kery Marc [marc...@vogelwarte.ch]
Sent: Thursday, 8 August 2013 10:36 p.m.
To: unma...@googlegroups.com
Subject: RE: [unmarked] pcount
Dear Alejandro,
the NegBin often fits, but can produce unrealistically high estimates of N; see the paper by Johnson et al. sometime back in 2009 or so. I would clearly not use it in this case. What about the ZIP? Does this produce reasonable estimates?
Re. the error message:
Error in optim(starts, nll, method = method, hessian = se, ...) :
initial value in 'vmmin' is not finite
Totally inexplicable to me (and to Richard Chandler as well): for about 2 months I have had the same problem when fitting Nmix models with NAs in the data set, even with the pcount example and the mallard data set. See here:
> # Real data
> data(mallard)
> mallardUMF <- unmarkedFramePCount(mallard.y, siteCovs = mallard.site,
+ obsCovs = mallard.obs)
> (fm.mallard <- pcount(~ ivel+ date + I(date^2) ~ length + elev + forest, mallardUMF, K=30))
Error in optim(starts, nll, method = method, hessian = se, ...) :
  initial value in 'vmmin' is not finite
In addition: Warning message:
4 sites have been discarded because of missing data.
When you fill all NAs in the covariate data, the problem goes away. Very strange.
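[Marc's workaround ("fill all NAs in the covariate data") could be sketched as below. obs is a made-up obsCovs fragment; mean-imputation is just one choice, and in practice you would only need to fill covariate cells whose corresponding counts are also missing, so the imputed values never enter the likelihood.]

```r
# Hypothetical obsCovs fragment with missing values.
obs <- data.frame(wind = c(0, NA, 2), temp_anomaly = c(1.6, NA, 0.2))

# Replace each NA with its column mean (one simple way to "fill all NAs").
filled <- as.data.frame(lapply(obs, function(x) {
  x[is.na(x)] <- mean(x, na.rm = TRUE)
  x
}))
```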
Kind regards -- Marc
______________________________________________________________
Marc Kéry
Swiss Ornithological Institute | Seerose 1 | CH-6204 Sempach | Switzerland
______________________________________________________________
*** Introduction to Bayesian statistical modeling: Kéry (2010), Introduction to WinBUGS for Ecologists, Academic Press; see www.mbr-pwrc.usgs.gov/pubanalysis/kerybook
*** Book on Bayesian statistical modeling: Kéry & Schaub (2012), Bayesian Population Analysis using WinBUGS, Academic Press; see www.vogelwarte.ch/bpa
*** Upcoming workshops: http://www.phidot.org/forum/viewforum.php?f=8
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of alejandro....@gmail.com [alejandro....@gmail.com]
Sent: 01 August 2013 06:39
To: unma...@googlegroups.com
Subject: [unmarked] pcount
Hi All,
I am in the process of analysing some long-term monitoring data from audio-visual counts of rainforest birds. My data are spatially and temporally replicated, with "points" in "sites" distributed across a broad environmental gradient (elevation, proxied here for simplicity by mean annual temperature, "MATemp") and repeated several times a year per site for about 10 years.
My very holey count data look like this:
        count.1 count.2 count.3 count.4 count.5 count.6 count.7 count.8 count.9 count.10 count.11 count.12 count.13
KUBC3         0       2      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
KUBC4         0      NA      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
KUBC5         1      NA      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
KUBC6         0      NA      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
TU8A2         0      NA      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
AU10A5        3       4       0       0       1       4       0       1       5        0        3       NA       NA
AU10A6        2       0       0       0       0       3       4       2       2        4        3       NA       NA
AU10A3        0       0       1       3       2       3       0       5       1        2        2        1        1
AU10A2        0       1       0       0       2       0       2       1       0       NA       NA       NA       NA
siteCovs look like this:
        MATemp annual_pptn12
KUBC3     19.4          2120
KUBC4     19.3          2144
KUBC5     19.3          2144
KUBC6     19.3          2144
KUBC2     19.3          2153
KUBA1     20.0          1922
obsCovs look like this:
      wind.1 wet.1 temp_anomaly.1 month.1 start2.1 wind.2 wet.2 temp_anomaly.2 month.2 start2.2 wind.3 wet.3
KUBC3      0     1            1.6       3     8.10     NA    NA             NA      NA       NA     NA    NA
KUBC4      0     1           -1.3      10     7.13      1     1           -0.3       3     6.40     NA    NA
KUBC5      0     1            1.2       3     7.35     NA    NA             NA      NA       NA     NA    NA
KUBC6      2     1            0.2       3     8.30     NA    NA             NA      NA       NA     NA    NA
KUBC2      0     1            1.7       3     7.25      2     1           -1.3       3     6.27     NA    NA
KUBA1      2     1            1.0      10     8.25     NA    NA             NA      NA       NA     NA    NA
At the moment I am focussing on getting some reasonable models fitted incorporating covariates of detection and abundance, so I am ignoring the temporal component for now. I am able to return some good fits with a very simple quadratic term for temperature, looking like this (see above; for some reason it wants to display there...).
Based on AIC, this model performs better than one without covariates, without a quadratic term, or with alternative error distributions...
                       nPars     AIC  delta    AICwt cumltvWt
lam(I(MATemp^2))p(.)NB     4 3138.53    0.00  1.0e+00     1.00
lam(MATemp)p(.)NB          4 3163.51   24.98  3.8e-06     1.00
lam(MATemp)p(.)ZIP         4 3213.17   74.65  6.2e-17     1.00
lam(I(MATemp^2))p(.)P      3 3368.83  230.30  9.8e-51     1.00
lam(MATemp)p(.)P           3 3504.11  365.58  4.1e-80     1.00
lam(.)p(.)P                2 3633.74  495.22 2.9e-108     1.00
and has reasonable bootstrap support,
Call: parboot(object = fm6, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3892          -1214           1524.3       0.7624
Chisq        8212           2950           1126.6       0.0297
freemanTukey  807            -70             96.5       0.7030
But ... it returns maximum abundance estimates higher than 30, nearly three times the maximum recorded count (~12 individuals) (when plotted, as above, on the original scale). From a search of these pages and others, this could result from very low detectability estimates, and indeed I have a lot of zeros (though this model also outperformed its ZIP equivalents). But when I try to include observation covariates to absorb some of this variation, e.g. a covariate for the start time of the survey:
> (fm8 <- pcount(~start ~I(MATemp^2), spp.umf, starts = c(1,0,0,0,0), K = 100,mixture = "NB"))
I cannot get past this error message:
Error in optim(starts, nll, method = method, hessian = se, ...) :
initial value in 'vmmin' is not finite
I have tried rescaling this covariate, but as it is categorical, this was possibly not even appropriate. I have five other covariates (month, wind, wet, and even temperature anomaly, the deviation of the survey temperature from the mean temperature at a site), each rescaled and raw, to no avail. There is wide variation in the number of visits to my sites, ranging from only 3 visits to 18, but when I restrict occasions to a maximum of 3, I get the same error. I have also tried tweaking starting values, but I am not sure I know how to choose reasonable ones in this case... Is pcount even appropriate in the case of data like these?
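[On rescaling: a categorical covariate such as a start-time class is usually better supplied as a factor, while continuous covariates are standardized. A minimal sketch with made-up values:]

```r
# Made-up observation covariates for illustration.
obs <- data.frame(start = c("early", "late", "early"),
                  temp_anomaly = c(1.6, -1.3, 0.2))

# Categorical: let the model build dummy variables via a factor,
# rather than rescaling category codes as if they were numeric.
obs$start <- factor(obs$start)

# Continuous: center and scale to mean 0, sd 1 (often helps optim converge).
obs$temp_anomaly.s <- as.numeric(scale(obs$temp_anomaly))
```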
Any thoughts would be much appreciated!
regards,
Alex
Hi Alex,
I don't have time to dig through your script right now, but you should be aware that the predicted abundance curve should not "fit" the observed data points unless p = 1. However, if you think pcount has changed in some way in recent versions, I would appreciate it if you could send me some evidence of that. The function gets tested before each release, and I haven't noticed any problems.
Richard
On Sun, Sep 8, 2013 at 9:09 AM, Alex Anderson <alejandro....@gmail.com> wrote:
Hi All,
Until recently I have been getting good results fitting models with pcount to bird survey data. I then relaxed some of my data filtering to allow more variation in survey conditions, thinking that this might help improve the fit of the detectability component of my models. Until this point I was getting good quadratic fits to my data (see plots below), and was able to predict from ZIP fits using AICcmodavg. Now I seem to be getting very flat or sometimes even concave fits to the same data!
I have gone back to my original data to test, and the results are similar. Adding many observation covariates (essentially overfitting) makes little difference, as does changing the family (P, NB). Have I missed something? Attached is a zipped archive with the script and files necessary to recreate the models and plot. Any thoughts would be much appreciated.
best regards
Alex
On 9/08/13 1:34 PM, Richard Chandler wrote:
Hi Marc, what versions of unmarked and RcppArmadillo are you using?
Regarding the delta method for the ZIP model, I'm afraid I don't have time to write the code right now, but it would be great if someone wanted to contribute code.
Richard
On Fri, Aug 9, 2013 at 4:21 AM, Kery Marc <marc...@vogelwarte.ch> wrote:
Question to Alex: so you no longer got the "initial value in 'vmmin' is not finite" error when fitting the model after you updated unmarked? I just did and I still get it ....
From: unma...@googlegroups.com [mailto:unma...@googlegroups.com] on behalf of Kery Marc
Sent: Friday, 9 August 2013 10:17
To: 'unma...@googlegroups.com'
Subject: RE: [unmarked] pcount
Dear Alex,
re. CIs for a ZIP fit of the Nmix, you can use a predict function in Marc Mazerolle's AICcmodavg package (pointed out to me by Andy Royle).
Regards -- Marc
From: unma...@googlegroups.com [mailto:unma...@googlegroups.com] on behalf of Alex Anderson
Sent: Friday, 9 August 2013 09:37
To: unma...@googlegroups.com
Subject: Re: [unmarked] pcount
Dear all,
Thanks so much for sharing helpful comments. Following Richard's suggestions, I've used the R engine and succeeded in having pcount fit some Nmix models with obsCovs to my bird survey data from Australian montane rainforests. Today, after a package update, I am even having success with the C engine (thanks to Richard?). As Marc and others have pointed out before, it is possible to get models with good AIC performance and GOF whose estimates are nonetheless way above what one would expect from the data. I have a set of models in which a ZIP model with no detection covariates out-performs NB, but only just.
                              nPars     AIC   delta    AICwt cumltvWt
fm7:~1~T+T^2+Pptn,ZIP             6 2909.32    0.00  4.5e-01     0.45
fm13:~ta~T+T^2,NB                 6 2910.24    0.92  2.8e-01     0.74
fm20:~Wt+ta~T+T^2,ZIP             8 2911.83    2.50  1.3e-01     0.87
fm22:~Wt+Sn+ta~T+T^2+Pptn,ZIP     8 2911.83    2.50  1.3e-01     0.99
fm15:~Sn~T+T^2,NB                 6 2920.72   11.40  1.5e-03     1.00
fm5:~1~T+T^2,NB                   5 2920.86   11.54  1.4e-03     1.00
fm11:~Wn~T+T^2,NB                 6 2921.48   12.16  1.0e-03     1.00
fm18:~St~T+T^2,ZIP                6 2921.53   12.21  1.0e-03     1.00
fm9:~Wt~T+T^2,NB                  6 2921.95   12.63  8.2e-04     1.00
fm8:~1~T+T^2+Pptn,NB              6 2948.27   38.94  1.6e-09     1.00
fm14:~ta~T+T^2,ZIP                6 2961.14   51.81  2.5e-12     1.00
fm21:~Wt+Sn+ta~T+T^2+Pptn,NB      8 2962.04   52.71  1.6e-12     1.00
fm6:~1~T+T^2,ZIP                  5 2969.38   60.06  4.1e-14     1.00
fm12:~Wn~T+T^2,ZIP                6 2969.71   60.39  3.5e-14     1.00
fm19:~Wt+ta~T+T^2,NB              6 2970.11   60.79  2.9e-14     1.00
fm16:~Sn~T+T^2,ZIP                6 2970.35   61.02  2.5e-14     1.00
fm17:~St~T+T^2,NB                 6 2970.35   61.02  2.5e-14     1.00
fm10:~Wt~T+T^2,ZIP                6 2971.24   61.91  1.6e-14     1.00
fm3:~1~T,NB                       4 3134.72  225.40  5.1e-50     1.00
fm4:~1~T,ZIP                      4 3170.93  261.61  7.0e-58     1.00
fm2:~1~T,P                        3 3464.14  554.82 1.5e-121     1.00
fm1:~1~1,P                        2 3591.60  682.28 3.2e-149     1.00
(site covariates: "T" = mean annual temperature, "Pptn" = mean annual precipitation;
obsCovs: "ta" = temperature anomaly of the survey (= survey temp minus mean annual temperature),
"Wt" = survey wetness (= mostly canopy drip in the rainforest), "Sn" = season, "Wn" = wind,
"St" = start time)
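For reference, the AICwt column follows the standard Akaike-weight formula, w_i = exp(-delta_i/2) / sum_j exp(-delta_j/2). A minimal sketch using just the top three AIC values from the table above (the resulting weights differ from those shown because the real weights are computed over the full model set):

```r
# Akaike weights from raw AIC values (top three models only, for illustration)
aic   <- c(fm7 = 2909.32, fm13 = 2910.24, fm20 = 2911.83)
delta <- aic - min(aic)                          # delta AIC vs. the best model
w     <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike weights (sum to 1)
cumsum(w)                                        # cumulative weights
```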
Model fits are good for these three top models:
# fm7:~1~T+T^2+Pptn,ZIP
Call: parboot(object = fm7, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3329         -111.1            620.4       0.5149
Chisq        5819         2011.9           1046.9       0.0396
freemanTukey  613           52.2             43.9       0.0891
# fm13:~ta~T+T^2,NB
Call: parboot(object = fm13, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3293         -529.9            761.4       0.7426
Chisq        5529         1690.8           1268.7       0.0396
freemanTukey  603           13.6             39.7       0.3465
# fm20:~Wt+ta~T+T^2,ZIP
Call: parboot(object = fm20, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3279         -473.4            744.1       0.7525
Chisq        5460         1770.4           1058.4       0.0594
freemanTukey  600           19.6             42.9       0.3168
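For readers unfamiliar with the fitstats function passed to parboot above: it is a user-supplied statistic function, typically written along the lines of the example in ?parboot, returning SSE, a chi-square statistic, and the Freeman-Tukey statistic from observed counts and fitted values. A self-contained sketch of those three statistics on toy vectors (an illustrative helper, not the exact function used in this thread):

```r
# Illustrative helper: the three fit statistics reported by parboot,
# computed from observed and expected (fitted) counts
fitstats_vec <- function(observed, expected) {
  c(SSE          = sum((observed - expected)^2, na.rm = TRUE),
    Chisq        = sum((observed - expected)^2 / expected, na.rm = TRUE),
    freemanTukey = sum((sqrt(observed) - sqrt(expected))^2, na.rm = TRUE))
}

# Toy values, not the real survey data
stats <- fitstats_vec(observed = c(0, 2, 1, 3),
                      expected = c(0.8, 1.5, 1.2, 2.5))
```

In the real calls, observed would come from getY(fm@data) and expected from fitted(fm), with NAs handled by na.rm = TRUE as above.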
But the estimates they give vary widely! E.g., in this case, where the observed max(count) = 12 individuals, the maximum estimate from a NegBin (top figure) is around 50 (!), while that from the ZIP with obsCovs is around 13.8 (middle figure), and around 20 without obsCovs (bottom figure). Detectability is reasonably good in this species, despite the habitat context (Lewin's Honeyeater, a medium-sized, vocal passerine with a loud and distinctive call).
As pointed out previously by Richard, this brings me up against a current limitation in the unmarked code for predict where the prior is a ZIP function: at the moment no confidence intervals are calculated for a ZIP function. Confidence intervals for zero-inflated functions (NegBin also), and perhaps quicker C++ code to run them, would be a fantastic addition to unmarked. In the meantime, is there any example code out there for a recommended way to achieve this outside unmarked?
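One route outside unmarked is the delta method applied to the ZIP expected abundance, E[N] = (1 - psi) * lambda. The sketch below is purely illustrative: the coefficients, covariance matrix, and parameterization are made up; in a real application they would come from coef(fm) and vcov(fm) of the fitted pcount model, with careful attention to the parameter order and link functions unmarked actually uses.

```r
# Hypothetical ZIP summaries (NOT from a real model fit):
est <- c(b0 = 0.5, b1 = -0.2, logit_psi = -1.0)  # log-lambda coefs + logit(psi)
V   <- diag(c(0.04, 0.01, 0.09))                 # hypothetical covariance matrix
x   <- 1.5                                       # covariate value to predict at

# Expected abundance under the ZIP: E[N] = (1 - psi) * lambda
EN <- function(th) (1 - plogis(th[3])) * exp(th[1] + th[2] * x)

# Numerical gradient of E[N] with respect to the parameters
eps  <- 1e-6
grad <- sapply(seq_along(est), function(i) {
  th <- est; th[i] <- th[i] + eps
  (EN(th) - EN(est)) / eps
})

# Delta-method SE and a Wald-type 95% CI on the abundance scale
se <- sqrt(drop(t(grad) %*% V %*% grad))
ci <- EN(est) + c(-1.96, 1.96) * se
```

A Wald interval like this can dip below zero for small expected abundances; computing the interval on the log scale and back-transforming is a common alternative.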
Thanks again for your helpful comments.
regards
Alex
On 8/08/13 10:40 PM, Kery Marc wrote:
Dear Murray,
yes, that's the paper. And several people, including myself, have fitted Nmix models with Poisson or NegBin priors for abundance and got unrealistic abundance estimates from the latter, even when a traditional GOF test (e.g., based on Chisquare) indicated the model fit. (As an aside, this is a good point to remember: whether a model fits or not does not necessarily mean anything in terms of whether it is useful.)
I find this a difficult problem: deciding which mixture to adopt for N in the model. Quite often we find that a Poisson mixture does not fit, even when we add a couple of covariates. Since traditional wisdom says we should not base our inference on a model that does not pass some GOF test, we should therefore try some other mixture distribution, e.g. the ZIP or the NegBin, which are currently implemented in unmarked. Others that have been fit in the context of an Nmix model are the Poisson log-normal or a DPP (for the latter, see the 2008 Biometrics paper by Dorazio et al.). There is clearly scope for research here.
Kind regards -- Marc
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of Murray Efford [murray...@otago.ac.nz]
Sent: 08 August 2013 14:03
To: unma...@googlegroups.com
Subject: RE: [unmarked] pcount
Hi Marc et al
Would that be Joseph, Elkin, Martin & Possingham (2009) Modeling abundance using N-mixture models: the importance of considering ecological mechanisms. Ecol. Appl. 19:631-642?
It seems to fit. I'm curious how we deal convincingly with strong model-dependence in these cases. Perhaps we can rely on the accumulated wisdom of practitioners, but that is a little hard to justify to statisticians!
Murray
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of Kery Marc [marc...@vogelwarte.ch]
Sent: Thursday, 8 August 2013 10:36 p.m.
To: unma...@googlegroups.com
Subject: RE: [unmarked] pcount
Dear Alejandro,
the NegBin often fits, but can produce unrealistically high estimates of N; see the paper by Johnson et al. from sometime back in 2009 or so. I would clearly not use it in this case. What about the ZIP? Does this produce reasonable estimates?
Re. the error message:
Error in optim(starts, nll, method = method, hessian = se, ...) :
  initial value in 'vmmin' is not finite
Totally inexplicable to me (and to Richard Chandler as well): for about 2 months I have had the same problem when fitting Nmix models with NAs in the data set, even with the pcount example and the mallard data set. See here:
>   # Real data
>   data(mallard)
>   mallardUMF <- unmarkedFramePCount(mallard.y, siteCovs = mallard.site,
+     obsCovs = mallard.obs)
>   (fm.mallard <- pcount(~ ivel + date + I(date^2) ~ length + elev + forest, mallardUMF, K = 30))
Error in optim(starts, nll, method = method, hessian = se, ...) :
  initial value in 'vmmin' is not finite
In addition: Warning message:
4 sites have been discarded because of missing data.
When you fill all NAs in the covariate data, the problem goes away. Very strange.
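A sketch of that workaround on a toy covariate data frame. Mean-imputation of numeric columns is shown purely as a stopgap to avoid the discarded-sites code path; whether it is statistically defensible depends on the design, and the missing visits themselves should keep their NAs in the count matrix.

```r
# Fill NAs in numeric covariate columns with the column mean (toy example,
# not the mallard data; a crude stopgap rather than a recommendation)
fill_na_means <- function(df) {
  for (j in names(df)) {
    if (is.numeric(df[[j]])) {
      df[[j]][is.na(df[[j]])] <- mean(df[[j]], na.rm = TRUE)
    }
  }
  df
}

obs    <- data.frame(date = c(10, NA, 14), ivel = c(0.2, 0.5, NA))
filled <- fill_na_means(obs)
```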
Kind regards -- Marc
______________________________________________________________
Marc Kéry
Swiss Ornithological Institute | Seerose 1 | CH-6204 Sempach | Switzerland
______________________________________________________________
*** Introduction to Bayesian statistical modeling: Kéry (2010), Introduction to WinBUGS for Ecologists, Academic Press; see www.mbr-pwrc.usgs.gov/pubanalysis/kerybook
*** Book on Bayesian statistical modeling: Kéry & Schaub (2012), Bayesian Population Analysis using WinBUGS, Academic Press; see www.vogelwarte.ch/bpa
*** Upcoming workshops: http://www.phidot.org/forum/viewforum.php?f=8
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of alejandro....@gmail.com [alejandro....@gmail.com]
Sent: 01 August 2013 06:39
To: unma...@googlegroups.com
Subject: [unmarked] pcount
Hi All,
I am in the process of analysing some long-term monitoring data from audio-visual counts of rainforest birds. My data are spatially and temporally replicated, with "points" in "sites" distributed across a broad environmental gradient (elevation, proxied here for simplicity by mean annual temperature, "MATemp") and repeated several times a year per site for about 10 years.
My very holey count data look like this:
        count.1 count.2 count.3 count.4 count.5 count.6 count.7 count.8 count.9 count.10 count.11 count.12 count.13
KUBC3         0       2      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
KUBC4         0      NA      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
KUBC5         1      NA      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
KUBC6         0      NA      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
TU8A2         0      NA      NA      NA      NA      NA      NA      NA      NA       NA       NA       NA       NA
AU10A5        3       4       0       0       1       4       0       1       5        0        3       NA       NA
AU10A6        2       0       0       0       0       3       4       2       2        4        3       NA       NA
AU10A3        0       0       1       3       2       3       0       5       1        2        2        1        1
AU10A2        0       1       0       0       2       0       2       1       0       NA       NA       NA       NA
siteCovs look like this:
       MATemp annual_pptn12
KUBC3    19.4          2120
KUBC4    19.3          2144
KUBC5    19.3          2144
KUBC6    19.3          2144
KUBC2    19.3          2153
KUBA1    20.0          1922
obsCovs look like this:
      wind.1 wet.1 temp_anomaly.1 month.1 start2.1 wind.2 wet.2 temp_anomaly.2 month.2 start2.2 wind.3 wet.3
KUBC3      0     1            1.6       3     8.10     NA    NA             NA      NA       NA     NA    NA
KUBC4      0     1           -1.3      10     7.13      1     1           -0.3       3     6.40     NA    NA
KUBC5      0     1            1.2       3     7.35     NA    NA             NA      NA       NA     NA    NA
KUBC6      2     1            0.2       3     8.30     NA    NA             NA      NA       NA     NA    NA
KUBC2      0     1            1.7       3     7.25      2     1           -1.3       3     6.27     NA    NA
KUBA1      2     1            1.0      10     8.25     NA    NA             NA      NA       NA     NA    NA
At the moment I am focussing on getting some reasonable models fitted incorporating covariates of detection and abundance, so ignoring the temporal component for now. I am able to get some good fits with a very simple quadratic term for temperature (see the plot above; for some reason it displays there).
Based on AIC, this model performs better than models without covariates, without a quadratic term, or with alternative error distributions...
                       nPars     AIC   delta    AICwt cumltvWt
lam(I(MATemp^2))p(.)NB     4 3138.53    0.00  1.0e+00     1.00
lam(MATemp)p(.)NB          4 3163.51   24.98  3.8e-06     1.00
lam(MATemp)p(.)ZIP         4 3213.17   74.65  6.2e-17     1.00
lam(I(MATemp^2))p(.)P      3 3368.83  230.30  9.8e-51     1.00
lam(MATemp)p(.)P           3 3504.11  365.58  4.1e-80     1.00
lam(.)p(.)P                2 3633.74  495.22 2.9e-108     1.00
and has reasonable bootstrap support,
Call: parboot(object = fm6, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3892          -1214           1524.3       0.7624
Chisq        8212           2950           1126.6       0.0297
freemanTukey  807            -70             96.5       0.7030
But.... it returns maximum abundance estimates higher than 30, nearly three times the maximum recorded count (~12 individuals) when plotted on the original scale as above. From a search of these pages and others, this could result from very low detectability estimates, and indeed I have a lot of zeros (though this model also outperformed its ZIP equivalents). But when I try to include observation covariates to absorb some of this variation, e.g. a covariate for the start time of the survey:
> (fm8 <- pcount(~start ~I(MATemp^2), spp.umf, starts = c(1,0,0,0,0), K = 100, mixture = "NB"))
I cannot get past this error message:
Error in optim(starts, nll, method = method, hessian = se, ...) :
  initial value in 'vmmin' is not finite
I have tried rescaling this covariate, but as it is categorical this was possibly not even appropriate. I have five other covariates (month, wind, wet, and even temperature anomaly, the deviation of the survey temperature from the mean temperature at a site), each tried rescaled and raw, to no avail. There is wide variation in the number of visits to my sites, ranging from only 3 to 18, but when I restrict occasions to a maximum of 3, I get the same error. I have also tried tweaking starting values, but I am not sure I know how to choose reasonable ones in this case... Is pcount even appropriate in the case of data like these?
Any thoughts would be much appreciated!
regards,
Alex
--
Richard Chandler
University of Georgia
Warnell School of Forestry and Natural Resources

Hi Alex,
I don't have time to dig through your script right now, but you should be aware that the predicted abundance curve should not "fit" the observed data points unless p=1. However, if you think pcount has changed in some way in recent versions, I would appreciate it if you could send me some evidence of that. The function gets tested before each release and I haven't noticed any problems.
Richard
On Sun, Sep 8, 2013 at 9:09 AM, Alex Anderson <alejandro....@gmail.com> wrote:
Hi All,
Until recently I have been getting good results fitting models in pcount open to bird survey data.� I then relaxed some of my data filtering to allow more variation in survey conditions, thinking that this may help to the improve fit of the detectability component of my models.� Until this point I was getting good quadratic fits to my data (see plots below), (and able to predict from ZIP fits using AICcmodavg).� Now I seem to be getting very flat or sometime even concave fits to the same data!
I have gone back to my original data now to test, and the results are similar.Adding many observation covariates (overfitting essentially) makes little difference, as does changing the family (P, NB) Have I missed something?� Attached is a zipped archive with script and files necessary to recreate the models and plot.� Any thoughts would be much appreciated.
best regards
Alex
On 9/08/13 1:34 PM, Richard Chandler wrote:
Hi Marc, what versions of unmarked and RcppArmadillo are you using?�
Regarding the delta method for the ZIP model, I'm afraid I don't have time to write the code right now, but it would be great if someone wanted to contribute code.�
Richard
On Fri, Aug 9, 2013 at 4:21 AM, Kery Marc <marc...@vogelwarte.ch> wrote:
Question to Alex: so you no longer got the infinite initial in vmmin error when fitting the model after you updated unmarked ? I just did and I still get it ....
�
�
�
Von: unma...@googlegroups.com [mailto:unma...@googlegroups.com] Im Auftrag von Kery Marc
Gesendet: Freitag, 9. August 2013 10:17
An: 'unma...@googlegroups.com'
Betreff: AW: [unmarked] pcount
�
Dear Alex,
�
re. CIs for a ZIP fit of the Nmix, you can use a predict function in Marc Mazerolle�s AICcmodavg package (pointed out to me by Andy Royle).
�
Regards� --� Marc
�
�
�
Von: unma...@googlegroups.com [mailto:unma...@googlegroups.com] Im Auftrag von Alex Anderson
Gesendet: Freitag, 9. August 2013 09:37
An: unma...@googlegroups.com
Betreff: Re: [unmarked] pcount
�
Dear all,
Thanks so much for sharing helpful comments.� Following Richards suggestions, I've used the R engine and succeeded in having pcount fit some Nmix models with obscovs to my bird survey data from Australian montane rainforests.� Today, after a package update, I am even having success with the C engine (Thanks to Richard?).� As Marc and others have pointed out before, it is possible to get models with a good AIC performance and GOF that nonetheless are way above the estimates one would expect from the data.� I have a set of models in which a ZIP model with no detection covariates out-performs NB, but only just.
���������������������������� nPars���� AIC� delta��� AICwt cumltvWt
fm7:~1~T+T^2+Pptn,ZIP������������ 6 2909.32�� 0.00� 4.5e-01���� 0.45
fm13~ta~T+T^2,NB����������������� 6 2910.24�� 0.92� 2.8e-01���� 0.74
fm20:~Wt+ta~T+T^2,ZIP������������ 8 2911.83�� 2.50� 1.3e-01���� 0.87
fm22:~Wt+Sn+ta~T+T^2+Pptn,ZIP���� 8 2911.83�� 2.50� 1.3e-01���� 0.99
fm15:~Sn~T+T^2,NB���������������� 6 2920.72� 11.40� 1.5e-03���� 1.00
fm5:~1~T+T^2,NB������������������ 5 2920.86� 11.54� 1.4e-03���� 1.00
fm11~Wn~T+T^2,NB����������������� 6 2921.48� 12.16� 1.0e-03���� 1.00
fm18:~St~T+T^2,ZIP��������������� 6 2921.53� 12.21� 1.0e-03���� 1.00
fm9:~Wt~T+T^2,NB����������������� 6 2921.95� 12.63� 8.2e-04���� 1.00
fm8:~1~T+T^2+Pptn,NB������������� 6 2948.27� 38.94� 1.6e-09���� 1.00
fm14:~ta~T+T^2,ZIP��������������� 6 2961.14� 51.81� 2.5e-12���� 1.00
fm21:~Wt+Sn+ta~T+T^2+Pptn,NB����� 8 2962.04� 52.71� 1.6e-12���� 1.00
fm6:~1~T+T^2,ZIP����������������� 5 2969.38� 60.06� 4.1e-14���� 1.00
fm12:~Wn~T+T^2,ZIP��������������� 6 2969.71� 60.39� 3.5e-14���� 1.00
fm19:~Wt+ta~T+T^2,NB������������� 6 2970.11� 60.79� 2.9e-14���� 1.00
fm16:~Sn~T+T^2,ZIP��������������� 6 2970.35� 61.02� 2.5e-14���� 1.00
fm17:~St~T+T^2,NB���������������� 6 2970.35� 61.02� 2.5e-14���� 1.00
fm10:~Wt~T+T^2,ZIP��������������� 6 2971.24� 61.91� 1.6e-14���� 1.00
fm3:~1~T,NB���������������������� 4 3134.72 225.40� 5.1e-50���� 1.00
fm4:~1~T,ZIP��������������������� 4 3170.93 261.61� 7.0e-58���� 1.00
fm2:~1~T,P����������������������� 3 3464.14 554.82 1.5e-121���� 1.00
fm1:~1~1,P����������������������� 2 3591.60 682.28 3.2e-149���� 1.00
(site covariates: "T" = mean annual temperature, "Pptn" =mean annual precipitation,
obscovs: "ta" = temp anomaly of the survey (= survey temp minus mean annual temperature),
"Wt" = survey wetness (= mostly canopy drip in the rainforest), "Sn" = Season, "Wn" = wind,
"St" = start time)
Model fits are good for both these top models:
#fm7:~1~T+T^2,ZIP
Call: parboot(object = fm7, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
�������������� t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE��������� 3329�������� -111.1����������� 620.4������ 0.5149
Chisq������� 5819�������� 2011.9���������� 1046.9������ 0.0396
freemanTukey� 613���������� 52.2������������ 43.9������ 0.0891
#fm13~ta~T+T^2,NB
Call: parboot(object = fm13, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
�������������� t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE��������� 3293�������� -529.9����������� 761.4������ 0.7426
Chisq������� 5529�������� 1690.8���������� 1268.7������ 0.0396
freemanTukey� 603���������� 13.6������������ 39.7������ 0.3465
fm20:~Wt+ta~T+T^2,ZIP
Call: parboot(object = fm20, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
�������������� t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE��������� 3279�������� -473.4����������� 744.1������ 0.7525
Chisq������� 5460�������� 1770.4���������� 1058.4������ 0.0594
freemanTukey� 600���������� 19.6������������ 42.9������ 0.3168
but the estimates they give vary widely! e.g., in this case, where the observed max(count)� = 12 individuals, the maximum estimate from a NegBin (top figure) is around 50 (!), while that from ZIP with obscovs is around 13.8 (middle figure), and around 20 without obscovs (bottom figure).� detectability is reasonably good in this species, despite the habitat context� Lewins' Honeyeater, a medium sized, vocal passerine with a loud and distinctive call)
As pointed out previously by Richard, this brings me up against a current limitation in the unmarked code for predict where the prior is a ZIP function:� At the moment there are no confidence intervals calculated for a ZIP function.� Confidence intervals for zero-inflated functions (Neg Bin also) (and perhaps the quicker C++ code to run them!?) would be a fantastic addition to unmarked.. In the meantime, is there any example code out there for a recommended way to achieve this outside unmarked in the mean time?
Thanks again for your helpful comments.
regards
Alex
On 8/08/13 10:40 PM, Kery Marc wrote:
Dear Murray,
yes, that's the paper. And several people, including myself, have fitted Nmix models with Poisson or NegBin priors for abundance and got unrealistic abundance estimates from the latter, even when a traditional GOF test (e.g., based on Chisquare) indicated the model fit. (As an aside, this is a good point to remember: whether a model fits or not does not necessarily mean anything in terms of whether it is useful.)
I find this a difficult problem: to decide which mixture to adopt for N in the model. Quite often, we find that a Poisson mixture does not fit, even when we add a couple of covariates. Since traditional wisdom says we should not base our inference on a model that does not pass some GOF test, we should therefore try some other mixture distribution, e.g., the ZIP or the NegBin which are currently implenented in unmarked. Others that have been fit in the context of a Nmix model are the Poisson log-normal or a DPP (for the latter, see the 2008 Biometrics paper by Dorazio et al.). There is clearly scope for research here.
Kind regards� --� Marc
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of Murray Efford [murray...@otago.ac.nz]
Sent: 08 August 2013 14:03
To: unma...@googlegroups.com
Subject: RE: [unmarked] pcount
Hi Marc et al
Would that be Joseph, Elkin, Martin & Possingham (2009) Modeling abundance using N-mixture models: the importance of considering ecological mechanisms. Ecol. Appl. 19:631-642?
It seems to fit. I'm curious how we deal convincingly with strong model-dependence in these cases. Perhaps we can rely on the accumulated wisdom of practitioners, but that is a little hard to justify to statisticians!
Murray
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of Kery Marc [marc...@vogelwarte.ch]
Sent: Thursday, 8 August 2013 10:36 p.m.
To: unma...@googlegroups.com
Subject: RE: [unmarked] pcount
Dear Alejandro,
the NegBin often fits, but can produce unrealistically high estimates of N; see paper by Johnson et al sometime back in 2009 or so. I would clearly not use it in this case. What about the ZIP ? Does this produce reasonable estimates ?
Re. the error message:
Error in optim(starts, nll, method = method, hessian = se, ...) :
� initial value in 'vmmin' is not finite
Totally inexplicable to me (and to Richard Chandler as well), for about 2 months, I have had the same problem when fitting Nmix models with NAs in the data set, even with the pcount example and the mallard data set. See here:
>�� # Real data
>����� data(mallard)
>����� mallardUMF <- unmarkedFramePCount(mallard.y, siteCovs = mallard.site,
+����� obsCovs = mallard.obs)
>����� (fm.mallard <- pcount(~ ivel+ date + I(date^2) ~ length + elev + forest, mallardUMF, K=30))
Fehler in optim(starts, nll, method = method, hessian = se, ...) :
� Anfangswert in 'vmmin' ist nicht endlich
Zus�tzlich: Warnmeldung:
4 sites have been discarded because of missing data.
When you fill all NAs in the covariate data, the problem goes away. Very strange.
Kind regards� --� Marc
______________________________________________________________
�
Marc K�ry
�
Swiss Ornithological Institute | Seerose 1 | CH-6204 Sempach | Switzerland
______________________________________________________________
�
*** Introduction to Bayesian statistical modeling: K�ry (2010), Introduction to WinBUGS for Ecologists, Academic Press; see www.mbr-pwrc.usgs.gov/pubanalysis/kerybook
*** Book on Bayesian statistical modeling: K�ry & Schaub (2012), Bayesian Population Analysis using WinBUGS, Academic Press; see www.vogelwarte.ch/bpa
*** Upcoming workshops: http://www.phidot.org/forum/viewforum.php?f=8
From: unma...@googlegroups.com [unma...@googlegroups.com] on behalf of alejandro....@gmail.com [alejandro....@gmail.com]
Sent: 01 August 2013 06:39
To: unma...@googlegroups.com
Subject: [unmarked] pcount
Hi All,
I am in the process of analysing some long-term monitoring data from audio-visual counts of rainforest birds.� My data are spatially and temporally replicated, with "points" in "sites" distributed across a broad environmental gradient (elevation, proxied here for simplicity with mean annual temperature ("MATemp")) and repeated several times a year per site for about 10 years.�
my very holey count data look like this
:
�������� count.1 count.2 count.3 count.4 count.5 count.6 count.7 count.8 count.9 count.10 count.11 count.12 count.13
KUBC3��������� 0���� � 2 ���� NA����� NA����� NA����� NA����� NA����� NA����� NA������ NA������ NA������ NA������ NA
KUBC4��������� 0����� NA����� NA����� NA����� NA����� NA����� NA����� NA����� NA������ NA������ NA������ NA������ NA
KUBC5��������� 1 ���� NA����� NA����� NA����� NA����� NA����� NA����� NA����� NA������ NA������ NA������ NA������ NA
KUBC6 ������ � 0����� NA����� NA����� NA����� NA����� NA����� NA����� NA����� NA������ NA������ NA������ NA������ NA
TU8A2��������� 0����� NA����� NA����� NA����� NA����� NA����� NA����� NA����� NA������ NA������ NA������ NA������ NA
AU10A5�������� 3������ 4������ 0������ 0������ 1������ 4������ 0������ 1������ 5������� 0������� 3������ NA������ NA
AU10A6�������� 2������ 0������ 0������ 0������ 0������ 3������ 4������ 2������ 2������� 4������� 3������ NA������ NA
AU10A3�������� 0������ 0������ 1������ 3������ 2������ 3������ 0������ 5������ 1������� 2������� 2������� 1������� 1
AU10A2�������� 0������ 1������ 0������ 0������ 2������ 0������ 2������ 1������ 0������ NA������ NA������ NA������ NA
siteCovs look like this
�������� MATemp annual_pptn12
KUBC3����� 19.4��������� 2120
KUBC4����� 19.3��������� 2144
KUBC5����� 19.3��������� 2144
KUBC6����� 19.3��������� 2144
KUBC2����� 19.3��������� 2153
KUBA1����� 20.0��������� 1922
The obsCovs look like this:
       wind.1 wet.1 temp_anomaly.1 month.1 start2.1 wind.2 wet.2 temp_anomaly.2 month.2 start2.2 wind.3 wet.3
KUBC3       0     1            1.6       3     8.10     NA    NA             NA      NA       NA     NA    NA
KUBC4       0     1           -1.3      10     7.13      1     1           -0.3       3     6.40     NA    NA
KUBC5       0     1            1.2       3     7.35     NA    NA             NA      NA       NA     NA    NA
KUBC6       2     1            0.2       3     8.30     NA    NA             NA      NA       NA     NA    NA
KUBC2       0     1            1.7       3     7.25      2     1           -1.3       3     6.27     NA    NA
KUBA1       2     1            1.0      10     8.25     NA    NA             NA      NA       NA     NA    NA
At the moment I am focusing on getting some reasonable models fitted that incorporate covariates of detection and abundance, so I am ignoring the temporal component for now. I am able to get some good fits with a very simple quadratic term for temperature, looking like this (see above; for some reason it wants to display there...).
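The call that produced the best-supported fit is not quoted in the thread, but it was presumably something along these lines (a hedged sketch; the names `fm6` and `spp.umf` are taken from the parboot output and the `fm8` call quoted further down, and the formula matches the `lam(I(MATemp^2))p(.)NB` label in the AIC table):

```r
library(unmarked)

# Sketch of the best-supported model: constant detection, quadratic
# mean-annual-temperature effect on abundance, negative binomial mixture.
# fm6 and spp.umf are the names used elsewhere in this thread.
fm6 <- pcount(~ 1 ~ I(MATemp^2), data = spp.umf, K = 100, mixture = "NB")
```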
Based on AIC, this model performs better than one without covariates, without a quadratic term, or with alternative error distributions...
                       nPars     AIC  delta    AICwt cumltvWt
lam(I(MATemp^2))p(.)NB     4 3138.53   0.00  1.0e+00     1.00
lam(MATemp)p(.)NB          4 3163.51  24.98  3.8e-06     1.00
lam(MATemp)p(.)ZIP         4 3213.17  74.65  6.2e-17     1.00
lam(I(MATemp^2))p(.)P      3 3368.83 230.30  9.8e-51     1.00
lam(MATemp)p(.)P           3 3504.11 365.58  4.1e-80     1.00
lam(.)p(.)P                2 3633.74 495.22 2.9e-108     1.00
and has reasonable bootstrap support,
Call: parboot(object = fm6, statistic = fitstats, nsim = 100, report = 1)
Parametric Bootstrap Statistics:
               t0 mean(t0 - t_B) StdDev(t0 - t_B) Pr(t_B > t0)
SSE          3892          -1214           1524.3       0.7624
Chisq        8212           2950           1126.6       0.0297
freemanTukey  807            -70             96.5       0.7030
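For anyone following along, `fitstats` is not an unmarked function; it is presumably a user-defined statistic in the style of the example on the `?parboot` help page, roughly:

```r
# Fit statistics to pass to parboot(); sketch adapted from the example
# in ?parboot (unmarked): SSE, chi-square, and Freeman-Tukey.
fitstats <- function(fm) {
    observed <- getY(fm@data)
    expected <- fitted(fm)
    resids   <- residuals(fm)
    sse      <- sum(resids^2, na.rm = TRUE)
    chisq    <- sum((observed - expected)^2 / expected, na.rm = TRUE)
    freeTuke <- sum((sqrt(observed) - sqrt(expected))^2, na.rm = TRUE)
    c(SSE = sse, Chisq = chisq, freemanTukey = freeTuke)
}

# Reproduces the call quoted above, given a fitted model fm6
pb <- parboot(fm6, fitstats, nsim = 100, report = 1)
```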
But it returns maximum abundance estimates above 30, nearly three times the maximum recorded count (~12 individuals), when plotted on the original scale as above. From a search of these pages and others, this could result from very low detectability estimates, and indeed I have a lot of zeros (though this model also outperformed its ZIP equivalents). But when I try to include observation covariates to absorb some of this variation, e.g. a covariate for the start time of the survey:
> (fm8 <- pcount(~start ~I(MATemp^2), spp.umf, starts = c(1,0,0,0,0), K = 100,mixture = "NB"))
I cannot get past this error message:
Error in optim(starts, nll, method = method, hessian = se, ...) :
� initial value in 'vmmin' is not finite
I have tried rescaling this covariate, but as it is categorical this was possibly not even appropriate. I have five other covariates (month, wind, wet, and even temperature anomaly, the deviation of the survey temperature from the mean temperature at a site), each tried rescaled and raw, to no avail. There is wide variation in the number of visits to my sites, ranging from only 3 visits up to 18, but when I restrict occasions to a maximum of 3, I get the same error. I have also tried tweaking starting values, but I am not sure I know how to choose reasonable ones in this case... Is pcount even appropriate in the case of data like these?
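Two things that often help with the 'vmmin is not finite' error are centering and scaling continuous obsCovs and declaring categorical ones as factors, so that R builds the dummy variables itself. A hedged sketch, untested on these data (covariate names as printed above):

```r
# Standardize continuous obsCovs and declare categorical ones as factors
# before refitting. obsCovs()<- is the unmarked replacement method.
oc <- obsCovs(spp.umf)
oc$start2 <- as.numeric(scale(oc$start2))  # continuous: center and scale
oc$wind   <- factor(oc$wind)               # categorical: let R make dummies
obsCovs(spp.umf) <- oc

# If supplying 'starts', its length must match the parameter count:
# a factor adds one detection coefficient per extra level, and the NB
# mixture adds a final log(alpha) term.
fm8 <- pcount(~ start2 ~ I(MATemp^2), spp.umf, K = 100, mixture = "NB")
```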
Any thoughts would be much appreciated!
regards,
Alex
--
You received this message because you are subscribed to the Google Groups "unmarked" group.
To unsubscribe from this group and stop receiving emails from it, send an email to unmarked+u...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
--
Richard Chandler
University of Georgia
Warnell School of Forestry and Natural Resources