The most likely proximate reason you're not seeing a larger vocabulary after `build_vocab(..., update=True)` is that your new corpus doesn't contain enough usage examples of each new word to meet the model's `min_count` threshold.
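To make that concrete, here is a minimal, schematic sketch of the pruning rule (this is an illustration of the idea only, not gensim's actual implementation; the function name and arguments are hypothetical):

```python
from collections import Counter

def surviving_new_words(new_corpus, existing_vocab, min_count=5):
    """Schematic of vocabulary-update pruning: a *new* word only enters
    the vocabulary if it occurs at least `min_count` times in the new
    corpus. (Illustration only, not gensim's code.)"""
    counts = Counter(word for sentence in new_corpus for word in sentence)
    return {
        word for word, n in counts.items()
        if word not in existing_vocab and n >= min_count
    }

# "newword" appears only twice, so with the default min_count=5 it is dropped:
new_corpus = [["newword", "the", "cat"], ["newword", "sat"]]
print(surviving_new_words(new_corpus, existing_vocab={"the", "cat", "sat"}))
# -> set()
```

Lowering `min_count` to 2 would let `"newword"` survive, but see the caveats below before reaching for that knob.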
More generally, you can't necessarily be confident, with such a tiny increment of training atop an old model, that it will do more good than harm. At the same time as it is training up the new words, it's also continuing to adjust old words that appear in the new texts - pulling them away from their prior representations, perhaps in a way that's not balanced against their original training examples. (The same is also occurring with the character n-grams.) In some cases, the new words might wind up with "good enough" representations with little damage to what was learned from the original training corpus; in others, your new examples might leave parts of the model less useful or less comparable with regard to the original data's patterns. You should be sure you have a way to evaluate whether such ad-hoc, incremental model updates are actually helping, especially compared to the baseline alternatives of (a) just relying on FastText's inherent ability to synthesize vectors for OOV words; or (b) retraining the whole model on the old & new data combined, with the new word examples mixed throughout - ensuring an equal treatment of all words.
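Baseline (a) works because FastText composes a vector for any unseen word from its trained character n-grams. A rough sketch of that composition, with a toy lookup table standing in for the model's trained n-gram weights (all names here are hypothetical; real FastText hashes n-grams into buckets rather than using an exact dict):

```python
def char_ngrams(word, minn=3, maxn=6):
    """Character n-grams of '<word>', with the boundary markers FastText
    adds; minn/maxn mirror FastText's defaults."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(minn, maxn + 1)
            for i in range(len(w) - n + 1)]

def synthesize_vector(word, ngram_vectors, dim=4):
    """Schematic of OOV synthesis: average the vectors of the word's
    known character n-grams. `ngram_vectors` is a toy stand-in for the
    model's trained n-gram table."""
    hits = [ngram_vectors[g] for g in char_ngrams(word) if g in ngram_vectors]
    if not hits:
        return [0.0] * dim
    return [sum(v[i] for v in hits) / len(hits) for i in range(dim)]

# Toy table with two trained trigrams; "cat" was never a full-word entry,
# yet it still gets a (crude) vector from its n-grams:
toy_table = {"<ca": [1.0, 0.0, 0.0, 0.0], "at>": [0.0, 1.0, 0.0, 0.0]}
print(synthesize_vector("cat", toy_table))  # -> [0.5, 0.5, 0.0, 0.0]
```

In gensim, this is simply `model.wv["someoovword"]` on a trained `FastText` model - no vocabulary update needed at all, which is why it's worth benchmarking against before doing any incremental training.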
- Gordon