As per the original Word2Vec papers & the word2vec.c implementation, looking up a word returns the vector that contributes to the *input* of the neural network. (In skip-gram, one word's vector is the entire NN input for a single training example; in CBOW, many words' vectors are summed or averaged to form the NN input for a single training example.)
The model also has *output*-side weights, and those output vectors are most clearly identifiable when using negative sampling, where each predictable word has its own output node with `vector_size` weights leading into it. There's no formal gensim API for accessing those by string key, but you could manually pull them from `model.syn1neg`, using the same word-to-index mapping as is used to find the `syn0` (input) vectors. (I don't know of any similarly tidy way to pull such an 'output vector' for individual words from a hierarchical-softmax model. There might be a way to calculate one, but I'm not sure of the practicality/utility of such an exercise.)
- Gordon