Function predict in unmarked


Manuel Spínola

Oct 25, 2012, 12:26:09 PM
to unma...@googlegroups.com
Dear list members,

I want to calculate a confidence interval to plot occupancy against a specific covariate included in my best model, but I get the same results whether I use the argument interval = "confidence" or interval = "prediction".

How can I get confidence intervals instead of prediction intervals with the predict function?

Best,

Manuel


--
Manuel Spínola, Ph.D.
Instituto Internacional en Conservación y Manejo de Vida Silvestre
Universidad Nacional
Apartado 1350-3000
Heredia
COSTA RICA
mspi...@una.ac.cr
mspin...@gmail.com
Teléfono: (506) 2277-3598
Fax: (506) 2237-7036
Personal website: Lobito de río
Institutional website: ICOMVIS

Richard Chandler

Oct 25, 2012, 4:55:35 PM
to unma...@googlegroups.com
Hi Manuel,

This is perhaps slightly confusing, but predict returns confidence intervals, not prediction intervals. If you want to compute other types of intervals, I would recommend using parboot() or nonparboot(). 

Richard

Manuel Spínola

Oct 25, 2012, 5:31:14 PM
to unma...@googlegroups.com
Thank you very much Richard.

Yes, it is confusing for me because I am not a statistician.  I used predict (see below) and the confidence interval was too wide to plot (see the last rows). However, I also did what you did on page 21 of the "Dynamic occupancy models in unmarked" vignette and calculated the 95% CI as Predicted - 1.96*SE and Predicted + 1.96*SE, which gave the "li" and "ls" values shown below.  Those are completely different values; which should I use?

Also, could I use parboot() or nonparboot() to make a plot for a model with a continuous covariate, like the plot you made on page 21?
 
> round(p, 3)
    Predicted    SE lower upper NUMARB TRONCOS    li    ls
1       0.493 0.214 0.154 0.839  0.190    0.12 0.074 0.912
2       0.518 0.198 0.185 0.835  0.286    0.12 0.130 0.905
3       0.543 0.181 0.221 0.833  0.382    0.12 0.187 0.898
4       0.567 0.166 0.259 0.831  0.477    0.12 0.243 0.892
5       0.592 0.151 0.300 0.831  0.573    0.12 0.297 0.887
6       0.616 0.137 0.340 0.833  0.669    0.12 0.348 0.884
7       0.639 0.125 0.380 0.836  0.765    0.12 0.395 0.884
8       0.662 0.115 0.417 0.843  0.860    0.12 0.437 0.887
9       0.684 0.108 0.449 0.852  0.956    0.12 0.473 0.895
10      0.705 0.103 0.476 0.863  1.052    0.12 0.504 0.907
11      0.726 0.100 0.497 0.876  1.148    0.12 0.529 0.922
12      0.745 0.100 0.511 0.891  1.243    0.12 0.550 0.940
13      0.764 0.100 0.521 0.906  1.339    0.12 0.567 0.961
14      0.781 0.102 0.526 0.920  1.435    0.12 0.581 0.981
15      0.798 0.104 0.527 0.933  1.531    0.12 0.594 1.000
16      0.814 0.106 0.526 0.945  1.626    0.12 0.606 1.000
17      0.828 0.108 0.522 0.955  1.722    0.12 0.617 1.000
18      0.842 0.109 0.517 0.964  1.818    0.12 0.629 1.000
19      0.855 0.110 0.510 0.971  1.914    0.12 0.640 1.000
20      0.867 0.110 0.502 0.977  2.009    0.12 0.652 1.000
21      0.878 0.109 0.494 0.982  2.105    0.12 0.664 1.000
22      0.888 0.108 0.484 0.985  2.201    0.12 0.677 1.000
23      0.898 0.106 0.474 0.989  2.297    0.12 0.689 1.000
24      0.907 0.104 0.464 0.991  2.392    0.12 0.702 1.000
25      0.915 0.102 0.453 0.993  2.488    0.12 0.716 1.000
26      0.922 0.099 0.442 0.994  2.584    0.12 0.729 1.000
27      0.929 0.096 0.431 0.996  2.680    0.12 0.742 1.000
28      0.936 0.092 0.419 0.997  2.775    0.12 0.755 1.000
29      0.941 0.089 0.408 0.997  2.871    0.12 0.768 1.000
30      0.947 0.085 0.396 0.998  2.967    0.12 0.780 1.000
31      0.952 0.081 0.384 0.998  3.063    0.12 0.793 1.000
32      0.956 0.077 0.372 0.999  3.158    0.12 0.804 1.000
33      0.960 0.073 0.361 0.999  3.254    0.12 0.816 1.000
34      0.964 0.070 0.349 0.999  3.350    0.12 0.827 1.000
35      0.967 0.066 0.338 0.999  3.446    0.12 0.838 1.000
36      0.970 0.062 0.326 1.000  3.542    0.12 0.848 1.000
37      0.973 0.059 0.315 1.000  3.637    0.12 0.858 1.000
38      0.975 0.055 0.304 1.000  3.733    0.12 0.867 1.000
39      0.978 0.052 0.293 1.000  3.829    0.12 0.876 1.000
40      0.980 0.049 0.282 1.000  3.925    0.12 0.884 1.000
41      0.982 0.046 0.272 1.000  4.020    0.12 0.892 1.000
42      0.983 0.043 0.261 1.000  4.116    0.12 0.900 1.000
43      0.985 0.040 0.251 1.000  4.212    0.12 0.907 1.000
44      0.986 0.037 0.241 1.000  4.308    0.12 0.913 1.000
45      0.988 0.035 0.232 1.000  4.403    0.12 0.920 1.000
46      0.989 0.032 0.222 1.000  4.499    0.12 0.925 1.000
47      0.990 0.030 0.213 1.000  4.595    0.12 0.931 1.000
48      0.991 0.028 0.204 1.000  4.691    0.12 0.936 1.000
49      0.992 0.026 0.196 1.000  4.786    0.12 0.941 1.000
50      0.992 0.024 0.187 1.000  4.882    0.12 0.945 1.000
51      0.993 0.022 0.179 1.000  4.978    0.12 0.949 1.000
52      0.994 0.021 0.172 1.000  5.074    0.12 0.953 1.000
53      0.994 0.019 0.164 1.000  5.169    0.12 0.957 1.000
54      0.995 0.018 0.157 1.000  5.265    0.12 0.960 1.000
55      0.995 0.017 0.150 1.000  5.361    0.12 0.963 1.000
56      0.996 0.015 0.143 1.000  5.457    0.12 0.966 1.000
57      0.996 0.014 0.136 1.000  5.552    0.12 0.969 1.000
58      0.997 0.013 0.130 1.000  5.648    0.12 0.971 1.000
59      0.997 0.012 0.124 1.000  5.744    0.12 0.973 1.000
60      0.997 0.011 0.118 1.000  5.840    0.12 0.975 1.000
61      0.997 0.010 0.113 1.000  5.935    0.12 0.977 1.000
62      0.998 0.009 0.107 1.000  6.031    0.12 0.979 1.000
63      0.998 0.009 0.102 1.000  6.127    0.12 0.981 1.000
64      0.998 0.008 0.097 1.000  6.223    0.12 0.982 1.000
65      0.998 0.007 0.093 1.000  6.318    0.12 0.984 1.000
66      0.998 0.007 0.088 1.000  6.414    0.12 0.985 1.000
67      0.999 0.006 0.084 1.000  6.510    0.12 0.986 1.000
68      0.999 0.006 0.080 1.000  6.606    0.12 0.987 1.000
69      0.999 0.005 0.076 1.000  6.702    0.12 0.988 1.000
70      0.999 0.005 0.072 1.000  6.797    0.12 0.989 1.000
71      0.999 0.005 0.068 1.000  6.893    0.12 0.990 1.000
72      0.999 0.004 0.065 1.000  6.989    0.12 0.991 1.000
73      0.999 0.004 0.062 1.000  7.085    0.12 0.992 1.000
74      0.999 0.004 0.059 1.000  7.180    0.12 0.992 1.000
75      0.999 0.003 0.056 1.000  7.276    0.12 0.993 1.000
76      0.999 0.003 0.053 1.000  7.372    0.12 0.994 1.000
77      0.999 0.003 0.050 1.000  7.468    0.12 0.994 1.000
78      1.000 0.002 0.048 1.000  7.563    0.12 0.995 1.000
79      1.000 0.002 0.045 1.000  7.659    0.12 0.995 1.000
80      1.000 0.002 0.043 1.000  7.755    0.12 0.996 1.000
81      1.000 0.002 0.041 1.000  7.851    0.12 0.996 1.000
82      1.000 0.002 0.038 1.000  7.946    0.12 0.996 1.000
83      1.000 0.002 0.036 1.000  8.042    0.12 0.997 1.000
84      1.000 0.001 0.035 1.000  8.138    0.12 0.997 1.000
85      1.000 0.001 0.033 1.000  8.234    0.12 0.997 1.000
86      1.000 0.001 0.031 1.000  8.329    0.12 0.997 1.000
87      1.000 0.001 0.029 1.000  8.425    0.12 0.998 1.000
88      1.000 0.001 0.028 1.000  8.521    0.12 0.998 1.000
89      1.000 0.001 0.026 1.000  8.617    0.12 0.998 1.000
90      1.000 0.001 0.025 1.000  8.712    0.12 0.998 1.000
91      1.000 0.001 0.024 1.000  8.808    0.12 0.998 1.000
92      1.000 0.001 0.023 1.000  8.904    0.12 0.998 1.000
93      1.000 0.001 0.021 1.000  9.000    0.12 0.999 1.000
94      1.000 0.001 0.020 1.000  9.095    0.12 0.999 1.000
95      1.000 0.001 0.019 1.000  9.191    0.12 0.999 1.000
96      1.000 0.001 0.018 1.000  9.287    0.12 0.999 1.000
97      1.000 0.000 0.017 1.000  9.383    0.12 0.999 1.000
98      1.000 0.000 0.016 1.000  9.478    0.12 0.999 1.000
99      1.000 0.000 0.015 1.000  9.574    0.12 0.999 1.000
100     1.000 0.000 0.015 1.000  9.670    0.12 0.999 1.000




Richard Chandler

Oct 26, 2012, 3:27:42 PM
to unma...@googlegroups.com
Hi Manuel,

I wouldn't use Pred+/-1.96*SE. I don't know why I did that in the vignette. 

Either parboot or nonparboot can be used to obtain confidence intervals for predictions. 

Richard

Sarah Thompson

Oct 29, 2012, 10:32:45 AM
to unma...@googlegroups.com
Just to clarify...

Are you saying that the CIs generated by predict() are not appropriate? -- both the "upper" & "lower" values it generates as well as taking 1.96*SE?

predict() is considerably faster than parboot, so it sure would be nice if these intervals were reasonably accurate...

Sarah

Richard Chandler

Oct 29, 2012, 10:49:11 AM
to unma...@googlegroups.com
No! predict works perfectly fine. I was just trying to say that one *should* use predict instead of doing the computations manually. parboot and nonparboot are alternatives, but there is no need to use them if you are content with the standard confidence intervals. Sorry for the confusion.
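[Editorial note: for reference, here is a minimal self-contained sketch of that standard predict() workflow for plotting occupancy against a covariate with confidence intervals. The data are simulated, the covariate name NUMARB simply mirrors the thread, and the coefficient values are illustrative; this assumes the unmarked package is installed.]

```r
library(unmarked)

# Simulate a small single-season occupancy dataset (illustrative only)
set.seed(123)
M <- 50; J <- 4
NUMARB <- rnorm(M)
z <- rbinom(M, 1, plogis(-0.2 + 1.0 * NUMARB))      # true occupancy states
y <- matrix(rbinom(M * J, 1, rep(z, J) * 0.5), M, J) # detections, p = 0.5

umf <- unmarkedFrameOccu(y = y, siteCovs = data.frame(NUMARB = NUMARB))
fm <- occu(~1 ~ NUMARB, umf)

# Predict psi over a grid of covariate values; predict() returns
# Predicted, SE, lower, and upper (95% confidence interval) columns
nd <- data.frame(NUMARB = seq(min(NUMARB), max(NUMARB), length.out = 100))
pr <- predict(fm, type = "state", newdata = nd)

plot(nd$NUMARB, pr$Predicted, type = "l", ylim = c(0, 1),
     xlab = "NUMARB", ylab = "Occupancy probability (psi)")
lines(nd$NUMARB, pr$lower, lty = 2)
lines(nd$NUMARB, pr$upper, lty = 2)
```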

Richard

Manuel Spínola

Nov 16, 2012, 1:02:16 PM
to unma...@googlegroups.com
Thank you Richard,

Is there any example of how to use parboot or nonparboot to calculate CIs for a plot of psi against a numerical covariate?

Best,

Manuel 



Richard Chandler

Nov 19, 2012, 10:20:33 AM
to Unmarked package
Hi Manuel,

I don't have any examples to show you. Has anyone out there done something similar? This would just require writing a function that returns predicted values (using predict), and then computing the CIs as quantiles of the bootstrap distribution. If I find time, I will add an example to the help pages for nonparboot and parboot.
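[Editorial note: that recipe might look something like the following sketch, fit here to simulated data. The covariate name X, the prediction grid, and nsim = 50 are illustrative choices, not a prescription, and this assumes the unmarked package is installed.]

```r
library(unmarked)

# Simulate and fit a single-season occupancy model (illustrative only)
set.seed(1)
M <- 60; J <- 4
X <- rnorm(M)
z <- rbinom(M, 1, plogis(0 + 1.5 * X))
y <- matrix(rbinom(M * J, 1, rep(z, J) * 0.4), M, J)
umf <- unmarkedFrameOccu(y = y, siteCovs = data.frame(X = X))
fm <- occu(~1 ~ X, umf)

# Statistic: predicted psi over a covariate grid, computed from a fit
nd <- data.frame(X = seq(-2, 2, length.out = 25))
psiHat <- function(fit) predict(fit, type = "state", newdata = nd)$Predicted

# Parametric bootstrap: simulate from fm, refit, apply the statistic;
# CIs are then quantiles of the bootstrap distribution in pb@t.star
pb <- parboot(fm, statistic = psiHat, nsim = 50)
ci <- apply(pb@t.star, 2, quantile, probs = c(0.025, 0.975))
```

nonparboot() works analogously but resamples sites with replacement and stores the bootstrap fits in the fitted model object.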

Richard

Richard Chandler

Nov 24, 2012, 2:29:51 PM
to Unmarked package
Hi Rylee,
 
After thinking about this some more, I don't think the examples shown in the help files are the correct way of computing CIs for X="the number of sites occupied in the sample." I think the best way to do this would be to compute the posterior distribution of X using Empirical Bayes. The code below seems to do the job, but I would really appreciate it if someone could test it out, and tell me if I made any errors. If this is correct, then I will clean it up and add it to unmarked... and remove the incorrect examples. NOTE: there is nothing wrong with ranef() and bup() -- this is just a question of what is the best way to estimate X and CIs for X.
 
Richard
 
 
sitesOccupied <- function(psi.c, cuts=c(0, 1)) {
    N <- length(psi.c)           # number of sites
    n <- sum(psi.c >= cuts[2])   # number of sites treated as certainly occupied
    n0max <- sum((cuts[1] <= psi.c) & (psi.c < cuts[2]))     # sites with uncertain state (no detections)
    psi.c0 <- psi.c[(cuts[1] <= psi.c) & (psi.c < cuts[2])]  # occupancy prob for those sites
    postprobs <- rep(0, N+1)     # Posterior probabilities for X = 0, ..., N
    names(postprobs) <- 0:N
    for(n0 in 0:n0max) {         # Loop through possible values of n0
        cat("doing", n0+1, "of", n0max+1, "\n")
        ind <- combn(n0max, n0)  # all possible locations for the n0 sites
        hold <- rep(NA, ncol(ind))
        for(j in 1:ncol(ind)) {
            z <- rep(0, n0max)   # True occupancy state
            z[ind[,j]] <- 1      # one possible configuration
            hold[j] <- prod(dbinom(z, 1, psi.c0))
        }
        # The configurations are mutually exclusive, so their probabilities add
        postprobs[n+n0+1] <- sum(hold)
    }
    return(postprobs / sum(postprobs))
}
 
 
# Example
#psi.c <- bup(ranef(fm)) # conditional estimates of occurrence probability
psi.c <- c(1, 0.5, 0.1, 1, 1, 0.2, 0.4, 0.5, 0.5, 0.1, 0.001, 0.8)
nsites <- length(psi.c)
X <- sitesOccupied(psi.c)
plot(0:nsites, X, type="h",
     xlab="Sites occupied", ylab="Posterior probability")
 
q <- cumsum(X)
results <- c(Est. = as.integer(names(which.max(X))),
             lowerCI = as.integer(names(q)[max(which(q < 0.025))]),
             upperCI = as.integer(names(q)[min(which(q > 0.975))]))
results

NOTE: if n0max is much greater than 20, this will take a long time to compute, or it may not be possible. One cheap trick to avoid this is to use the "cuts" argument to indicate which values of "psi.c" are effectively 0 or 1. Hopefully the code is self-explanatory.
 
 
 
 
 
 
 


On Fri, Nov 23, 2012 at 6:00 PM, Rylee <ryl...@gmail.com> wrote:
Hi there,

I am doing a really simple single season occupancy model. I have 51 sites, 5 observations for each site (except one that only has 4 observations). I fit a bunch of models using all combinations of 10 site covariates for occupancy probability. Now I would like to calculate the overall occupancy probability for these 51 sites and calculate a 95% confidence interval. I have tried using ranef/BUP with this code using the top model from AIC selection (but not a clear winner).

re <- ranef(d23)
EBUP <- bup(re, stat="mean")
CI <- confint(re, level=0.95)
rbind(PAO = c(Estimate = sum(EBUP), colSums(CI)) / 51)

and get this output:

     Estimate      2.5%     97.5%
PAO 0.5893818 0.5882353 0.6078431

when I try the next model:

> re <- ranef(d24)
> EBUP <- bup(re, stat="mean")
> CI <- confint(re, level=0.95)
> rbind(PAO = c(Estimate = sum(EBUP), colSums(CI)) / 51)
     Estimate      2.5%     97.5%
PAO 0.5899122 0.5882353 0.6078431

The estimates are slightly different, but the CI is exactly the same.

When I try with the global model I get an estimate of 0.5896143, again, slightly different, but the confidence interval is the exact same.
Then, if I try doing all of this again at 90% confidence, I get wacky CIs that don't even include the estimate and, again, are all the same no matter what model I use.

> re <- ranef(d23)
> EBUP <- bup(re, stat="mean")
> CI <- confint(re, level=0.9)
> rbind(PAO = c(Estimate = sum(EBUP), colSums(CI)) / 51)
     Estimate        5%       95%
PAO 0.5893818 0.5882353 0.5882353

My question is:

Am I doing something wrong first of all?
If not, and there is possibly a bug in the code, is there a way to calculate overall occupancy probability for my 51 sites using either parboot or averaging of the output from predict?

Thanks,
Rylee




Rylee

Nov 26, 2012, 9:44:26 PM
to unma...@googlegroups.com
So, just to clarify: I can either use bup(ranef(my model)) for psi.c, which would give me output similar to the PAO examples that use binary predictions of occupancy, or I can input a vector of occupancy probabilities calculated, for example, with predict from my model?

Rylee

Richard Chandler

Nov 29, 2012, 9:08:19 PM
to Unmarked package
Hi Rylee,

Yes, use psi.c <- bup(ranef(my model)). Do not use predict.

Richard

Anna Mödrath

Dec 5, 2014, 3:28:21 AM
to unma...@googlegroups.com
Hey there,

I'm struggling with some similar issues and just don't completely understand what I should do. I've run a single-season model; the survey ran over 45 days at 13 sites (so quite a small dataset), with 2 categorical covariates (habitat, saltlick present/absent) and a continuous covariate (distance to water source).
My script contains a for loop for all my files. 

I used a script I found (https://sites.google.com/site/asrworkshop/home/schedule/r-occupancy-1) and adjusted it to my data:

library(unmarked)

wd <- "D:/MA/Daten/Occupancy_2"
setwd(wd)

my.files <- list.files(wd, pattern = ".csv")
# user-defined function to create an ordered table of AIC weights and predictions
my_aic <- function(labels, aic, predictions, se_predictions) {
    delta <- aic - min(aic)
    wt <- exp(-delta*0.5)
    wt <- wt/sum(wt)
    output <- data.frame(labels=labels, AIC=aic, Delta=delta, ModWt=wt,
                         Prediction=predictions, SE_prediction=se_predictions)
    output[order(aic), ]
}

# labels for the models
labels <- c("p(.)psi(.)","p(.)psi(salt)","p(.)psi(distance.water)","p(.)psi(open.shrub)",
            "p(.)psi(closed.shrub)","p(.)psi(watercourse)","p(.)psi(tree.shrub)")
results.aic <- c()
results.predict <- c()
results.psi <- c()

for (i in 1:length(my.files)) {
      my.data <- read.csv(my.files[i], h=TRUE, row.names=1)

      # detection data: rows are sites, columns are detection replicates
      y <- my.data[, 1:45]
      n <- nrow(my.data)
      # site-level covariates
      Cov.site <- my.data[, 46:51]
      # put everything together in an unmarked data frame
      alles <- unmarkedFrameOccu(y = y, siteCovs = Cov.site)
      summary(alles)

      # constant detection; occupancy constant or predicted by one covariate
      fm1 <- occu(~1 ~1, alles)
      fm2 <- occu(~1 ~salt, alles)
      fm3 <- occu(~1 ~distance.water, alles)
      fm4 <- occu(~1 ~open.shrub, alles)
      fm5 <- occu(~1 ~closed.shrub, alles)
      fm6 <- occu(~1 ~watercourse, alles)
      fm7 <- occu(~1 ~tree.shrub, alles)

      # summarize and compare the models
      fmlist <- fitList(p_dot_psi_dot=fm1, p_dot_psi_salt=fm2,
                        p_dot_psi_distance.water=fm3, p_dot_psi_open.shrub=fm4,
                        p_dot_psi_closed.shrub=fm5, p_dot_psi_watercourse=fm6,
                        p_dot_psi_tree.shrub=fm7)
      modSel(fmlist)
      pred.i <- predict(fmlist, type="state")
      pred.i <- cbind(my.files[i], pred.i)
      results.predict <- rbind(results.predict, pred.i)

      # ranef() takes a single fitted model at a time
      psi.c.i <- bup(ranef(fm1), stat="mean")
      psi.c.i <- cbind(my.files[i], psi.c.i)
      results.psi <- rbind(results.psi, psi.c.i)

      # AIC values
      aic <- c(fm1@AIC, fm2@AIC, fm3@AIC, fm4@AIC, fm5@AIC, fm6@AIC, fm7@AIC)

      # logit-scale predictions for psi under each model
      lc1 <- linearComb(fm1, c(1), type="state")
      lc2 <- linearComb(fm2, c(1, 1), type="state")
      lc3 <- linearComb(fm3, c(1, 1), type="state")
      lc4 <- linearComb(fm4, c(1, 1), type="state")
      lc5 <- linearComb(fm5, c(1, 1), type="state")
      lc6 <- linearComb(fm6, c(1, 1), type="state")
      lc7 <- linearComb(fm7, c(1, 1), type="state")

      # predictions and SEs under each model
      predictions <- c(lc1@estimate, lc2@estimate, lc3@estimate, lc4@estimate,
                       lc5@estimate, lc6@estimate, lc7@estimate)
      se_predictions <- sqrt(c(lc1@covMat, lc2@covMat, lc3@covMat, lc4@covMat,
                               lc5@covMat, lc6@covMat, lc7@covMat))

      aic_mods <- my_aic(labels=labels, aic=aic, predictions=predictions,
                         se_predictions=se_predictions)
      # then apply linear functions to the weighted average and SE and backtransform

      results.aic.i <- cbind(my.files[i], aic_mods)
      results.aic <- rbind(results.aic, results.aic.i)
}

write.csv(results.aic, "AIC.models.csv")
write.csv(results.predict, "Predictions.csv")
write.csv(results.psi, "Psi.conditional.csv")

I'll attach one of my data files as example. 

My aim is to make a plot of the probability of occupancy for every species at every site. I'm confused about the differences between predict and ranef. Can anybody help me with that? Which values can I use for the plot? The standard errors from predict are really high; why is that?

I'm sorry, if I'm asking silly questions, but I'm just a total beginner with R.

Kind regards 

Anna
BlueWildebeest1.occu.csv

Kery Marc

Dec 5, 2014, 9:20:53 AM
to unma...@googlegroups.com
Dear Anna,

you say this: "My aim is to make a plot of the probability of occupancy for every species at every site." --- I can see how, in a more management-oriented study, there will often be an interest focused on the exact list of sites studied. However, is there any lasting scientific interest in the particular sites in your study, or why would you want to predict the occurrence state of each? Wouldn't you rather want to use this sample of sites to learn something general about how the use of a site depends on site characteristics such as those embodied in your covariates?

If the latter is what you want, then you should use predict(). However, if you really, really want to get the best possible estimate of the occurrence state (i.e., whether a species is present or absent) at each of your study sites, then you should use ranef().

A little algebra is always very useful. Here is a sketch of the model with a single covariate X in occupancy probability:

(1) The true presence/absence state (z_i) of site i is a Bernoulli (coin-flip) random variable governed by parameter psi_i, which may be modelled as a function of covariate X as in a logistic regression:

z_i ~ Bern(psi_i)
logit(psi_i) = a + b*X_i

(2) What you observe, the "presence/absence measurement" y_ij at site i during replicate j, is linked to the true presence/absence state via another logistic regression:
y_ij ~ Bern(z_i * p_ij)

Whenever p<1 (detection is imperfect), presence/absence (z) is not observed without error and you may (sometimes) want to estimate it. Technically, z is a binary random effect in what can be called a Bernoulli-Bernoulli random effects (or mixed) model, so to estimate it you can use the ranef() function. However, although z is a binary thing, its estimate will be any number between 0 and 1; see pages 97-98 in MacKenzie et al. (2006, the occupancy bible, section "Probability of Occupancy Given Species Not Detected at a Site").
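[Editorial note: to make the two levels concrete, the data-generating process in (1) and (2) can be simulated in a few lines of base R; the values of a, b, and p are illustrative.]

```r
set.seed(42)
M <- 100; J <- 5           # sites and replicate surveys
a <- -0.5; b <- 1.2        # illustrative occupancy coefficients
p <- 0.4                   # constant detection probability

X <- rnorm(M)              # site covariate
psi <- plogis(a + b * X)   # logit(psi_i) = a + b*X_i
z <- rbinom(M, 1, psi)     # (1) true presence/absence state
y <- matrix(rbinom(M * J, 1, rep(z, J) * p), M, J)  # (2) observations y_ij

# z is never observed directly: detections can only occur where z = 1
all(y[z == 0, ] == 0)  # TRUE
```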

To study the tendency for some values of the covariate X to be associated with more, or fewer, presences of a species, you want to look at the relationship between that covariate and the parameter that governs the coin-flip-like process producing the presences and absences. That parameter is occupancy probability, psi, and it is always a value between 0 and 1 (i.e., psi is not binary). To characterise the system underlying the observed data, you want to plot the logistic-linear relationship between psi and the covariate X.

Hope this clarifies things a little.

Finally, I permit myself to make the following comment:
Basically, your question is one about logistic regression, so you should make sure that you really understand it by reading up on it in books about generalized linear models. Or, even more fundamentally, about the distinction between the outcome of a random variable (z) and the parameter(s) governing it (psi). I think that really, really good intro books for ecologists about these fundamentals of applied stats have yet to be written. However, in the meantime, there is a lot of reading that can be done to better understand this, e.g., see books by Crawley, Zuur, McCarthy, Royle & Dorazio, Kery, MacKenzie et al., and a couple of others.

I am convinced that the majority of questions on this list (and no doubt on many other, similar ones, e.g., for programs MARK or PRESENCE) are not really about the specific types of models (e.g., occupancy models, N-mixture models, CJS models...), but rather about the fundamentals of applied statistics: things such as random variables, linear models, and generalized linear models in particular. We have a very long way to go until these concepts really sink in among ecologists and similar scientists. (And I claim that I know exactly what I write about here, since I am an ecologist!)

Kind regards  ---  Marc

Anna Mödrath

Dec 17, 2014, 2:51:14 AM
to unma...@googlegroups.com
Dear Marc, 

I'm sorry for the late response, but I'm just super busy with my thesis.
Your answer really helped me out and I finally got some definitions right and understood the differences between those outputs. 

Thank you very much for your fast and very useful answer! 

You're very right that I need some deeper knowledge of the general algebra, and I'm working on that.

Kind regards 

Anna