Do you mean, generate a text (series of word-tokens) from a vector?
I know some deep/recurrent language models can do this from their own native summary vectors, and may even come up with vaguely grammatical texts. But `Doc2Vec` is pretty shallow/simple-minded. I believe the best you could hope for would be some indication of which words are most strongly predicted by a vector, and even that would only be likely/interpretable with some model types.
For `Word2Vec`, there's an experimental `predict_output_word()` method that, when given a context word or words, runs the same model forward-propagation as is used during training, to report the words most-predicted by the model. It only works for negative-sampling models, and doesn't apply quite the same context-weighting as is enforced during training, but it may be of interest as a possible approach, since `Doc2Vec` works very similarly to `Word2Vec`. You can view its source in gensim's `word2vec.py` module.
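Usage looks roughly like this (a minimal sketch; the tiny corpus and parameters are just placeholders, and the call will refuse to run on a model trained with hierarchical-softmax instead of negative sampling):

```python
from gensim.models import Word2Vec

# Toy corpus & parameters, just to show the call shape.
sentences = [["the", "quick", "brown", "fox", "jumps"],
             ["over", "the", "lazy", "dog"]]
model = Word2Vec(sentences, vector_size=50, min_count=1, negative=5, epochs=20)

# Given context words, report the words the model's output layer most predicts.
print(model.predict_output_word(["quick", "brown"], topn=5))
```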
The words attached to the top-N most-activated word-output-nodes for a `Doc2Vec` vector *might* make a reasonable, though non-grammatical, synthetic text for that doc-vector.
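A rough sketch of that idea, assuming a negative-sampling model and reaching into internal arrays (`syn1neg`, `wv.index_to_key`) whose names aren't part of the public API and have shifted between gensim versions:

```python
import numpy as np
from gensim.models import Doc2Vec

def most_activated_words(model: Doc2Vec, doc_vector, topn=10):
    """Return the topn words whose output-layer (negative-sampling) nodes
    are most strongly activated by the given doc-vector."""
    # One row of syn1neg per vocabulary word; dotting with the doc-vector
    # mimics the forward-propagation done during training.
    activations = np.dot(model.syn1neg, doc_vector)
    best = np.argsort(activations)[::-1][:topn]
    return [model.wv.index_to_key[i] for i in best]

# e.g. with an already-trained model:
# vec = model.infer_vector(["some", "tokenized", "text"])
# print(most_activated_words(model, vec, topn=20))
```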
(Of course, if the model was trained with many unique-ID doctags for known texts, then the usual `most_similar()` operation would suggest which known texts are similar to a given new query vector. You could also consider somehow mixing those top-N known texts together to synthesize a plausible text for the new vector, perhaps using only the repeated words, or words that are themselves 'close to' other words in the combined vocabulary of all the known nearby texts.)
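As an illustration of that first step, a minimal sketch (the toy corpus, tags, and parameters are placeholders; `model.dv` is the gensim 4.x name for the per-doctag vectors, older releases call it `docvecs`):

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus: each known text gets a unique-ID doctag.
corpus = [
    TaggedDocument(words=["human", "machine", "interface", "survey"], tags=["doc0"]),
    TaggedDocument(words=["graph", "minors", "trees", "paths"], tags=["doc1"]),
]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Infer a vector for a new text, then ask which known texts are closest to it.
new_vec = model.infer_vector(["graph", "trees"])
for doctag, similarity in model.dv.most_similar([new_vec], topn=2):
    print(doctag, similarity)
```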
- Gordon