--
You received this message because you are subscribed to the Google Groups "Gensim" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gensim+un...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Thanks for the reply! So model.wv is the trained word embedding. Then what is model.syn1neg if we use negative sampling?
On Sat, Aug 18, 2018 at 9:47 AM Gordon Mohr <> wrote:
After training Word2Vec, you have a trained model, which includes word-vectors and internal neural-network weights. There's no one 'context embedding', nor is it typical to re-create the context-embeddings that were created during training, at least not explicitly as that.

However, in skip-gram, the contexts used for training are just individual other word-vectors from within the configured `window`. So in your specific case, the question is equivalent to "how do I get a word's embedding (word-vector)?". And so if the context word is 'apple', its vector (which is also the context-embedding used as input to predict nearby words) is just:

model.wv['apple']

- Gordon
On Saturday, August 18, 2018 at 2:49:33 AM UTC-7, Sun Mingjie wrote:

Hi,

I am new to gensim. I am using Skip-Gram with negative sampling. Could you tell me how to get the context embedding after training on a corpus?

Thanks,
Mingjie