Note that Doc2Vec doesn't need word-vectors as an input – it will create any that are needed during model/doc-vector training. (And, pure PV-DBOW doesn't use/train word-vectors at all.) And word-vectors from your domain's data might be better than generic vectors – more representative of local word-senses & frequencies.
That said, there's an experimental method in class Word2Vec (inherited by Doc2Vec) called `intersect_word2vec_format()`. It will scan a word-vector file in the format as output by the Google word2vec.c tool, and for any word that is *already* in the model's known vocabulary, replace the model's word-vector weights with those from the file, *and* lock those weights against further changes. The idea is that after establishing your model's vocabulary (by `build_vocab()`), you might do this to bring in known frozen vectors – then proceed with training that only adjusts the non-imported words. There's no real evidence about whether or when it might help. You can search the forum archives for the method name for more discussion.
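As a rough sketch of that replace-and-lock idea, here's a pure-Python stand-in (not gensim's actual internals; the vocabulary, vectors, and lock-factor list are all invented for illustration):

```python
# Toy illustration of the intersect-and-lock idea: for each word *already* in
# the model's vocabulary, overwrite its vector with the imported one and set
# its per-word lock factor to 0.0 so later training leaves it unchanged.

def intersect_vectors(model_vocab, model_vectors, lock_factors, imported):
    """model_vocab: word -> row index; model_vectors: list of vectors;
    lock_factors: per-row multiplier applied to training updates
    (1.0 = fully trainable, 0.0 = frozen);
    imported: word -> vector, e.g. parsed from a word2vec.c-format file."""
    for word, vec in imported.items():
        if word in model_vocab:          # only already-known words are touched
            row = model_vocab[word]
            model_vectors[row] = list(vec)
            lock_factors[row] = 0.0      # frozen: future updates multiplied by 0

vocab = {'apple': 0, 'banana': 1}
vectors = [[0.1, 0.2], [0.3, 0.4]]
lockf = [1.0, 1.0]

# 'cherry' is *not* in the established vocab, so it is silently ignored
intersect_vectors(vocab, vectors, lockf,
                  {'apple': [9.0, 9.0], 'cherry': [5.0, 5.0]})
```

With real gensim you'd instead call `build_vocab()` on your own corpus, then `intersect_word2vec_format()` (with a `lockf` of 0.0 to freeze the imported rows), then `train()` – though the exact signature and location of that method has varied across gensim versions, so check the docs for your release.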
Especially in the Doc2Vec case, using a 'space' initially shaped only by word-vectors might overly constrain the expressiveness of the doc-vectors, whereas joint training would have settled on ranges-of-values that leave the doc-vectors more 'room' in the space.
Also, the word2vec.c-format doesn't include the 'output' weights, so (whether doing a full `load_word2vec_format()` or the intersect mentioned above), the resulting model isn't fully conditioned for further compatible training/inference. (The `syn1neg` or `syn1` layer is still all zeros.) Only after more bulk training (on relevant text examples) would the model re-learn the predictiveness that gave rise to the imported vectors – and thus perhaps become useful for training new compatible doc-vecs.
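To see why the zeroed output layer matters, here's a toy negative-sampling-style prediction (names and dimensions invented for illustration): with an all-zero output row, every dot-product is 0, so every prediction is a useless sigmoid(0) = 0.5, no matter how good the imported input vectors are.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(input_vec, output_vec):
    # Negative-sampling-style score: sigmoid of the dot product between an
    # input ('projection') vector and a row of the output layer.
    return sigmoid(sum(a * b for a, b in zip(input_vec, output_vec)))

imported_vec = [0.7, -1.2, 0.4]   # a (pretend) well-trained imported word-vector
zero_output = [0.0, 0.0, 0.0]     # a syn1neg-style row after a plain load: all zeros

p = predict(imported_vec, zero_output)
# p == sigmoid(0) == 0.5 for *every* word pair: the model has no predictive
# power until further training re-learns the output layer.
```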
As should be clear from the above, you'd be in experimental territory with such techniques. You'd want to examine the source-code and internal model-state closely, and probably directly adjust it at times, to understand which improvised mash-up steps are helping your end goals and which aren't.
- Gordon