Yes, but it may not be as meaningful as you're hoping - vectors that are 'orthogonal' to, or even 'opposite' from, a given vector may not match human intuitions of what those relationships should mean.
For strict orthogonality, you could supply a `topn` of `None` to `.most_similar()` - which then returns the raw similarities to *all* other words, in the vector set's internal (slot) ordering (rather than in descending-similarity order with words attached). The values closest to `0.0` indicate the slots of the words whose vectors are most orthogonal to the provided word (or raw vector).
(Or equivalently, a `topn` equal to the total number of words returns the usual ranked results for all words. Re-sort these not by descending similarity but by ascending absolute value of the similarity, and the top of those results will indicate the same 'most orthogonal' words as the previous method.)
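As a rough sketch of the first approach - here the all-words similarity array is simulated with plain numpy (random unit vectors standing in for a real model), since with a real gensim `KeyedVectors` the call `model.most_similar(positive=[target_word], topn=None)` returns exactly such an array of cosine similarities in slot order:

```python
import numpy as np

# Toy stand-in for a set of unit-length word-vectors, in slot order.
# With a real gensim model you'd instead do:
#   sims = model.most_similar(positive=[target_word], topn=None)
rng = np.random.default_rng(0)
vectors = rng.normal(size=(1000, 50))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

target = vectors[0]
sims = vectors @ target  # cosine similarity to *every* word, slot order

# Slots whose similarity is closest to 0.0 are the most orthogonal;
# with a real model, map slots back to words via model.index_to_key.
most_orthogonal_slots = np.argsort(np.abs(sims))[:10]
print(most_orthogonal_slots)
print(sims[most_orthogonal_slots])  # values near 0.0
```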
Similarly, since `.most_similar()` can take a raw vector, such as the negation of any actual vector, you could get the 'most-similar-to-the-opposite-direction' with something like:
```python
# Look up the words closest to the *negation* of target_word's vector.
neg_vec = -vec_model[target_word]
most_opposites = vec_model.most_similar(positive=[neg_vec])
```
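For a self-contained illustration of what that negation does - numpy only, with random unit vectors standing in for a real model's word-vectors - the best match for `neg_vec` under cosine similarity is just the word *least* similar to the original vector:

```python
import numpy as np

# Random unit vectors standing in for word-vectors, in slot order.
rng = np.random.default_rng(1)
vectors = rng.normal(size=(500, 20))
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

target = vectors[0]
neg_vec = -target

# Similarity of every word to the negated target is exactly the
# negation of its similarity to the target itself...
sims_to_neg = vectors @ neg_vec
sims_to_target = vectors @ target

# ...so the top hit for neg_vec is the word ranked *last* for target.
best_slot = int(np.argmax(sims_to_neg))
print(best_slot == int(np.argmin(sims_to_target)))  # True
```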
But note:
* 'opposite' in coordinate space is unlikely to indicate 'opposite' in human intuition - indeed, strict antonyms along some dimension of meaning tend to be quite similarly positioned overall, as they refer to the same domains-of-use and share many neighboring words
* usual ways of training word-vectors, such as negative-sampling with more than one negative sample per positive example, tend to make the 'cloud' of word-vectors a bit lopsided with respect to the origin point. See the mention of the 'All But The Top' paper in this prior message –
https://groups.google.com/g/gensim/c/o8cDWyihuKc/m/hruB7QLHHwAJ – for more discussion of this effect.
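If that lopsidedness is a concern, the first step of the 'All But The Top' post-processing - subtracting the common mean vector from every word-vector, so the cloud is re-centered on the origin - is easy to sketch (numpy only; the full paper also removes a few top principal components, which is omitted here):

```python
import numpy as np

# A deliberately off-center cloud of toy word-vectors.
rng = np.random.default_rng(2)
vectors = rng.normal(loc=0.3, size=(200, 10))

# Step 1 of 'All But The Top': subtract the common mean vector.
centered = vectors - vectors.mean(axis=0)

print(np.abs(centered.mean(axis=0)).max())  # ~0.0: mean now at origin
```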
- Gordon