On your local machine, unless you're using a 64-bit Python and have at least 8GB of RAM, it will be hard to work with a 3GB vector file.
You could try the optional `limit` argument to `load_word2vec_format()`, which when given a number will read only that many vectors from the front of the supplied file. (As such files are usually organized to put the more-frequent words first, the later words are usually of much less value.) Loading just the first 100,000 or 500,000 words might save a lot of memory and still be fine for your other purposes.
Calling `init_sims()` will only help reduce memory usage if you use the `init_sims(replace=True)` option – which discards the raw-magnitude vectors in favor of the unit-normalized vectors used for most-similar calculations. (Note that once replaced, the original magnitudes are gone, so you can no longer continue training the model.)
I've seen others with issues on Google Cloud but don't recall/know any specific workarounds, sorry.
- Gordon