I'm not Mr. Phillips, but I sense you need this information sooner rather than later, and I haven't seen an answer come across since you asked. Maybe this will help move the conversation along, so I'll attempt an answer -- and *anyone / everyone*, please point out where I've gone wrong, fill in blanks, make suggestions, etc.
You can probably ignore the fixed value thresholds if you are investigating a biological phenomenon such as distribution. They may be interesting from a statistical point of view but do not reflect any biological meaning.
Minimum training presence (you wrote maximum by accident) means the threshold is set so that no training sample is excluded. Use this only if you are confident in the validity of your entire training dataset, especially the points at the edge of the range. It is often used where a conservative approach is preferred, such as in invasive species modelling, where erring on the side of caution is less costly than the alternative. But it is not suitable if you are trying to identify native suitable habitat -- with any realistic dataset it probably over-estimates the range by a good margin.
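MaxEnt reports this threshold for you, but the idea is simple enough to sketch. A minimal example with numpy, using hypothetical suitability scores at the training presence points (the values are made up for illustration):

```python
import numpy as np

# Hypothetical predicted suitability values at the training presence points.
presence_scores = np.array([0.62, 0.71, 0.55, 0.80, 0.48, 0.93])

# Minimum training presence: the threshold is simply the lowest score among
# the training presences, so no training sample falls below it.
mtp_threshold = presence_scores.min()
print(mtp_threshold)  # 0.48
```

Note how a single dubious low-scoring presence point drags the threshold down and inflates the predicted range -- which is exactly why this rule needs a trustworthy training set.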
Ten percentile training presence sets the threshold so that the lowest-scoring 10% of your training presences are omitted, retaining the top 90% -- a better choice if you have less than full confidence in your training set. Some presences will be missed; it's up to you to decide whether that's acceptable. Probably better suited to native habitat estimation.
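Same idea as above, but taking a percentile instead of the minimum. Again a sketch with hypothetical scores:

```python
import numpy as np

# Hypothetical predicted suitability at ten training presence points.
presence_scores = np.array([0.62, 0.71, 0.55, 0.80, 0.48,
                            0.93, 0.67, 0.74, 0.59, 0.88])

# Ten percentile training presence: threshold at the 10th percentile of the
# presence scores, so roughly the lowest-scoring 10% of training presences
# fall below it (are omitted).
p10_threshold = np.percentile(presence_scores, 10)
frac_retained = (presence_scores >= p10_threshold).mean()
print(p10_threshold, frac_retained)  # retains 90% of the presences
```

With ten points, exactly one (the lowest, 0.48 here) falls below the threshold, so 90% of the training presences are retained.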
Equal training sensitivity and specificity -- at this threshold the chance of missing suitable distribution (omission) equals the chance of assigning unsuitable distribution (commission). Gives a decent 'average', perhaps. Are there specific situations where this is optimal?
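To make the sensitivity/specificity trade-off concrete, here is a sketch that sweeps candidate thresholds and picks the one where the two rates are closest. The presence and background scores are hypothetical:

```python
import numpy as np

# Hypothetical suitability scores at presence points and at background
# (pseudo-absence) points.
presence = np.array([0.55, 0.62, 0.70, 0.48, 0.81, 0.66, 0.74, 0.59])
background = np.array([0.20, 0.35, 0.52, 0.15, 0.44, 0.60, 0.28, 0.38])

# Equal sensitivity and specificity: choose the threshold where the fraction
# of presences correctly included (sensitivity) is closest to the fraction
# of background points correctly excluded (specificity).
candidates = np.unique(np.concatenate([presence, background]))
sens = np.array([(presence >= t).mean() for t in candidates])
spec = np.array([(background < t).mean() for t in candidates])
ess_threshold = candidates[np.argmin(np.abs(sens - spec))]
print(ess_threshold)  # 0.55
```

At that threshold both rates come out at 0.875 in this toy example: one presence is missed and one background point is wrongly included.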
Maximum training sensitivity plus specificity -- here the threshold maximises the sum of sensitivity and specificity, i.e. it minimises the combined chance of missing suitable distribution and of assigning unsuitable distribution. Not quite the same as the previous, because the two error rates are not forced to be equal; only their sum is optimised. Better than the previous??? No idea.
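Only one line changes from the previous sketch: maximise the sum rather than equalise the two rates. Hypothetical scores again, arranged so the two rules visibly differ:

```python
import numpy as np

# Hypothetical suitability scores at presence and background points.
presence = np.array([0.45, 0.58, 0.62, 0.70, 0.66, 0.74, 0.81, 0.59])
background = np.array([0.20, 0.35, 0.50, 0.52, 0.15, 0.44, 0.28, 0.38])

# Maximum sensitivity plus specificity: choose the threshold maximising
# sensitivity + specificity (equivalently, minimising the sum of the two
# error rates). The two rates need not be equal at the optimum.
candidates = np.unique(np.concatenate([presence, background]))
sens = np.array([(presence >= t).mean() for t in candidates])
spec = np.array([(background < t).mean() for t in candidates])
mss_threshold = candidates[np.argmax(sens + spec)]
print(mss_threshold)  # 0.58
```

In this toy example the optimum sits at sensitivity 0.875 and specificity 1.0 -- unequal rates, which is exactly how it can differ from the equal-sensitivity-and-specificity rule.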
Equal test ... etc. -- these are the same as the two options above, but computed on the test samples you set aside, not the training samples.
Equal entropy of threshold -- I could guess, but would rather not -- anyone else????
I generally use a threshold somewhere between 0% and 10% of the training dataset assigned incorrectly. It all depends on my confidence in the points used in training, but my application is quarantine plant pest modelling -- answering the question "Does it have a chance of survival in Canada?", not "Where specifically in Canada could it survive?" -- a setting where keeping something harmless out by accident is preferable to letting something harmful in by accident.
Martin