Recently, I have been trying to figure out how the word2vec vectors change when the non-static CNN channel from the paper "Convolutional Neural Networks for Sentence Classification" is used. I found that this model is already nicely implemented in Keras: <https://github.com/alexander-rakhlin/CNN-for-Sentence-Classification-in-Keras>.
So my question is: when the pre-trained word2vec vectors are used as the embedding layer's weights and are then fine-tuned by back-propagation during training, how can I take the fine-tuned embedding layer weights back out (like a word2vec model) and compute cosine similarities between words after training? (Like the right part of Table 3 in the paper: how do I get that?)
Is there any way to achieve that?
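What I have in mind is something like the minimal sketch below. It assumes the trained Keras model is called `model`, the embedding layer is named `"embedding"`, and `word_index` is the word-to-row mapping that was used to build the embedding matrix (those names are just my guesses, not from the linked repository). I am not sure whether this is the right approach:

```python
import numpy as np

# Pull the fine-tuned embedding matrix out of the trained model.
# For a Keras Embedding layer, get_weights() returns a list whose
# first element is the (vocab_size, embedding_dim) matrix.
embedding_weights = model.get_layer("embedding").get_weights()[0]

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def most_similar(word, topn=5):
    """Return the topn words closest to `word` in the fine-tuned space."""
    query = embedding_weights[word_index[word]]
    sims = {
        w: cosine_similarity(query, embedding_weights[i])
        for w, i in word_index.items()
        if w != word
    }
    return sorted(sims.items(), key=lambda x: x[1], reverse=True)[:topn]

# e.g. most_similar("good") should show how the nearest neighbours
# changed after fine-tuning, similar to the right column of Table 3.
```

Is this roughly how it is supposed to be done, or is there a more standard way (for example, loading the fine-tuned matrix back into a gensim model)?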