The requirements are pretty much identical to those of LDA/LSA. All the models in gensim are O(#dimensions * #vocabulary) in memory.
So in your case, expect 2 matrices * 4 bytes per float * 1,000 dimensions * 5.4m vocab = ~43GB of RAM. After training, run `model.init_sims(replace=True)` immediately: it discards the output weight matrix and replaces the vectors with their normalized versions, roughly halving memory. Note you can't continue training the model after that.
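The estimate above is just arithmetic; a minimal sketch of it (the `w2v_memory_gb` helper is mine, not a gensim function, and it ignores smaller overheads like the vocabulary dict):

```python
def w2v_memory_gb(dimensions, vocab_size, bytes_per_float=4, matrices=2):
    """Back-of-the-envelope word2vec footprint: two float32 weight
    matrices of shape (vocab_size, dimensions), in decimal GB."""
    return matrices * bytes_per_float * dimensions * vocab_size / 1e9

print(w2v_memory_gb(1000, 5_400_000))  # 43.2
```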
The original C word2vec tool has the same footprint, ~43GB of RAM.
A 1k x 5.4m model is already fairly large: Google's own GoogleNews word2vec model was only 300 dimensions x 3m vocab. You could just about train that one on your 16GB machine.
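For comparison, the same arithmetic for the GoogleNews shape (two weight matrices during training; the vocabulary dict and other overheads add a bit more, hence "just about" on 16GB):

```python
# 2 matrices * 4 bytes per float * 300 dimensions * 3m vocab, in decimal GB.
googlenews_gb = 2 * 4 * 300 * 3_000_000 / 1e9
print(googlenews_gb)  # 7.2
```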