Hello,
I'm trying to get the vocabulary size of a model I just trained with:
```
from gensim.models import KeyedVectors

# fname is the path to the saved binary model file
model = KeyedVectors.load_word2vec_format(fname, binary=True, unicode_errors='ignore')
print(len(model.index_to_key))  # number of words in the vocabulary
```
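To rule out a problem on the loading side, one could also read the vocabulary count straight from the file header; a minimal sketch, assuming fname is the same saved binary file:
```
# The word2vec binary format starts with an ASCII header line:
# "<vocab_size> <vector_size>\n"
with open(fname, 'rb') as f:
    vocab_size, vector_size = map(int, f.readline().split())
print(vocab_size, vector_size)
```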
This prints 34,521,720.
Then I trained another model on a bigger corpus with far more unique words,
but when I get its vocabulary size the same way, I still get 34,521,720.
I was wondering whether there is a hard limit on the vocabulary size,
or whether there is a bug somewhere.
A few details:
- Model 1 takes around 11 GB of disk space.
- Model 2 takes around 20 GB of disk space.
- Model 2 contains extra vocabulary words, as expected (see the check below).
- The vector size is 152 in both models.
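By "extra vocabulary words" I mean a membership check along these lines (the model variable names and probe words are placeholders, not my actual data):
```
# model_1 / model_2 are the two loaded KeyedVectors; the probe words
# stand in for actual words from the bigger corpus.
for word in ['example_word_a', 'example_word_b']:
    print(word,
          '| in Model 1:', word in model_1.key_to_index,
          '| in Model 2:', word in model_2.key_to_index)
```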
Thank you for your help!
Danilo