Hi Joao,
I had the same problem, not just with the Google embeddings but with any set of word embeddings. We came up with a Python-based solution that uses lazy loading of individual word vectors, which speeds things up considerably, apart from some other advantages:
https://github.com/nlpAThits/WOMBAT
You need to import your resource first, which might still take some time, but things are much faster from then on.
You also need to have the resource in plain text format. There are scripts out there that do that for you (I can also provide one).
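As a minimal sketch of such a conversion (assuming you have gensim installed and the usual GoogleNews-vectors-negative300.bin.gz file; the file names are just placeholders):

    # Convert the binary word2vec file to plain text (slow, but done only once).
    from gensim.models import KeyedVectors

    vectors = KeyedVectors.load_word2vec_format(
        "GoogleNews-vectors-negative300.bin.gz", binary=True)

    # Write the same vectors back out as plain text, one word per line.
    vectors.save_word2vec_format(
        "GoogleNews-vectors-negative300.txt", binary=False)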
Converting the resource to plain text prior to import also allows you to do some filtering of the vocabulary: the GoogleNews embeddings contain a *huge* number of phrases that are not really meaningful and will never be looked up anyway, unless you have a tokenizer that is aware of these phrases. A rough filtering sketch follows below.
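For example, something like this drops the phrases (which, as far as I know, are joined with underscores in the GoogleNews file) from the plain-text version before you import it; again, file names are placeholders:

    # Keep only single-token entries; phrases in GoogleNews use "_" as a joiner.
    with open("GoogleNews-vectors-negative300.txt", encoding="utf-8") as src, \
            open("GoogleNews-filtered.txt", "w", encoding="utf-8") as dst:
        header = src.readline()            # first line: "<vocab_size> <dimensions>"
        kept = []
        for line in src:
            word = line.split(" ", 1)[0]
            if "_" not in word:            # skip multi-word phrases
                kept.append(line)
        # Rewrite the header with the new vocabulary size, keep the dimensions.
        dst.write(f"{len(kept)} {header.split()[1]}\n")
        dst.writelines(kept)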
Best,
Christoph