According to my understanding, a confidence interval is different from a quantile. The issue came up while I was applying the null-model approach of Raes & ter Steege (2007), i.e. building a null 'random' model to evaluate the predictive accuracy of a 'real' Maxent model.
In that paper they write that they [assessed the 95% C.I. upper limit of AUC value, by ranking the 999 AUC values and selecting the 949th value (0.95*999 = 949)]. Another paper that used the same approach writes: [For each group of 500 null models the average AUC and the 95% Confidence Interval (CI) is calculated. The AUC of each 'real' species is then compared with the 95% CI of the 'random' species; if the AUC of the real species is higher than the 95% quantile value, this species is significantly different from random.] Both papers call this a 95% CI, yet the procedure they describe (ranking the null AUCs and picking the value at the 95% position) looks to me like a one-sided 95% quantile of the null distribution rather than a confidence interval, and that is what confuses me.
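For what it's worth, here is how I read the ranking procedure in code. This is only a sketch: the `null_aucs` values are simulated placeholders (in the papers they come from 999 Maxent runs on randomly drawn occurrence points), and `real_auc` is a hypothetical value. Note that the paper's "949th value" in 1-based counting corresponds to index 948 in Python's 0-based indexing.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder null AUC values; in the papers these are 999 AUCs from
# Maxent models fitted to randomly drawn occurrence points.
null_aucs = rng.uniform(0.4, 0.8, size=999)

# Rank the 999 null AUCs in ascending order and take the value at the
# 95% position. The paper's "949th value" (1-based) is index 948 here.
threshold = np.sort(null_aucs)[948]

real_auc = 0.85  # hypothetical AUC of the 'real' model
significant = real_auc > threshold  # 'significantly better than random'?
```

As far as I can tell, this is simply the empirical 95th percentile (one-sided upper quantile) of the null AUC distribution, not a confidence interval around the mean null AUC, which is why the two terms in the quoted passages seem inconsistent to me.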
It would be great if someone could discuss this here.