Hello Hank,
Here is my understanding of the three different replicated run types:
Cross-validation: Maxent splits your occurrence data into k folds, training on k-1 folds and testing on the one held out, so every record is used for testing exactly once. The number of replicates you set is the number of folds, which means you cannot set the withheld test percentage independently; it is fixed at 1/k. I believe this is optimal if you have a large number of species occurrences.
Subsample: Here you set both the number of replicates and the percentage of occurrences withheld for testing in each replicated run; the test records are drawn randomly (without replacement) each time. This method is optimal for modelers who want direct control over the number of reps and the withheld test percentage, and who have a moderate-to-large number of occurrences for their species of interest.
Bootstrapping: Here the training data are sampled with replacement, so this method can suit modelers with few occurrences, though Maxent may end up testing the model with occurrences that were also used to train it. Two potential problems with this method are that you lose statistical independence between your test and training data, and that your AUC values will end up slightly inflated; however, if you are limited in occurrence data, this may be your best option. I would use multiple approaches for model discrimination rather than relying on Maxent's AUC values alone.
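In case it helps to see the difference concretely, here is a rough sketch in Python of how the three schemes partition a set of occurrence records. This is only an illustration of the resampling logic, not Maxent's actual code, and the fold count, replicate count, and test percentage are made-up example values:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                       # number of occurrence records (example value)
idx = np.arange(N)

# 1) Cross-validation: k folds; each record is tested exactly once,
#    and the test fraction is fixed at 1/k.
k = 5
folds = np.array_split(rng.permutation(idx), k)
cv_splits = [(np.setdiff1d(idx, test), test) for test in folds]

# 2) Subsample: you choose both the replicate count and the withheld
#    percentage; each replicate draws a fresh random test set without
#    replacement, so train and test never overlap within a run.
reps, test_pct = 3, 0.25
sub_splits = []
for _ in range(reps):
    test = rng.choice(idx, size=int(N * test_pct), replace=False)
    sub_splits.append((np.setdiff1d(idx, test), test))

# 3) Bootstrap: the training sample is drawn WITH replacement, so a
#    record withheld for testing can also land in the training set --
#    the source of the lost independence and mildly inflated AUC
#    mentioned above.
boot_splits = []
for _ in range(reps):
    test = rng.choice(idx, size=int(N * test_pct), replace=False)
    train = rng.choice(idx, size=N, replace=True)  # may include test records
    boot_splits.append((train, test))
```

Note how only the subsample scheme lets you pick the replicate count and test percentage independently, and only the bootstrap allows train/test overlap.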
Hope this is helpful.
Best,
Tom