Hi Gabe, apologies for the late response!
> My understanding is if we train more less diverse images, i.e. images with frequent labels like Sand and Gravel, the training set randomly chosen will increasingly have more and more Sand and Gravel and less of the rarer labels represented.
Correct: more Sand and Gravel, and fewer of the rarer labels, as a proportion of the whole training set. However, in theory at least, adding more (accurately labeled) Sand and Gravel training points should not diminish the machine's ability to distinguish those labels from the rarer ones. But yes, it can skew the accuracy measurement in a way you might not want.
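To make the skew concrete, here's a toy illustration (the labels and counts are made up, and this is not CoralNet's actual evaluation code): when the evaluation set is dominated by frequent labels, the overall accuracy number can look great even if the rare labels are classified badly.

```python
# Toy illustration of how class imbalance can skew an overall accuracy number.
# Labels and counts are hypothetical; this is not CoralNet's evaluation code.
from collections import Counter

# Imbalanced evaluation set: mostly Sand/Gravel, few rare labels.
truth = ["Sand"] * 70 + ["Gravel"] * 25 + ["RareCoral"] * 5

# A classifier that gets the frequent labels right but misses every rare one.
predictions = ["Sand"] * 70 + ["Gravel"] * 25 + ["Sand"] * 5

overall_acc = sum(t == p for t, p in zip(truth, predictions)) / len(truth)
print(f"Overall accuracy: {overall_acc:.0%}")  # 95%, despite 0% on RareCoral

# Per-label accuracy reveals the skew.
for label in Counter(truth):
    idx = [i for i, t in enumerate(truth) if t == label]
    acc = sum(truth[i] == predictions[i] for i in idx) / len(idx)
    print(f"{label}: {acc:.0%}")
```

So the overall number mostly reflects how well the frequent labels are handled, which is why it can move in a way you might not want as the training set gets less diverse.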
> I understand that 1/8th of the trained images are examined randomly, but is there anyway for the classifier to be triggered by your team to look at another 1/8th?
Unfortunately, we don't have a straightforward way to change how the 1/8th is picked, even from the admin side.
We're generally working towards enabling more flexibility with using classifiers, so in the near future we'll likely provide some method to relax the criteria for saving a new classifier - such as letting you skip the accuracy check. That method doesn't exist yet, though. In the meantime, there is a workaround that some folks have been using to save a new classifier whenever they want:
When you add a label to or remove a label from your source's labelset, that starts a process where the source's existing classifier history gets deleted and a new classifier gets trained. This new classifier will use all of the currently available training data in the source. So what you'd do is add any label to your labelset, then remove that label right after. CoralNet should say it's resetting the classifiers twice (once per change), but if you do both actions within a few minutes, you should only have to wait for a single re-train.