Hi,
(tokenizer.word_index['good'], Keras_w2v_wv.vocab['good'].index) returns (76, 54)
(tokenizer.word_index['system'], Keras_w2v_wv.vocab['system'].index) returns (145, 109)
(tokenizer.word_index['high'], Keras_w2v_wv.vocab['high'].index) returns (34, 115)
As far as I understand, the word2vec training itself works in this example, but the Keras model ends up with effectively random initial word vectors because the tokenizer's indices don't match gensim's. A possible solution would be for the KeyedVectors object to have a method returning an object that translates words to their corresponding embedding indices (like an already fitted Keras tokenizer or a Gensim dictionary object), or a method that performs this encoding directly.
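In the meantime, one workaround is to build the Embedding layer's weight matrix in the tokenizer's own index order, so gensim's internal word ordering stops mattering. This is only a minimal sketch, assuming `tokenizer` and `Keras_w2v_wv` are the objects from the snippet above and a pre-4.0 gensim where KeyedVectors still exposes `.vocab`:

```python
import numpy as np
from keras.layers import Embedding

# Build an embedding matrix ordered by the *tokenizer's* indices,
# so the Keras model no longer depends on gensim's internal word order.
vocab_size = len(tokenizer.word_index) + 1  # +1: Keras tokenizer indices start at 1
embedding_dim = Keras_w2v_wv.vector_size

embedding_matrix = np.zeros((vocab_size, embedding_dim))
for word, idx in tokenizer.word_index.items():
    if word in Keras_w2v_wv.vocab:  # gensim < 4.0 vocab lookup, as in the example above
        embedding_matrix[idx] = Keras_w2v_wv[word]
    # words unknown to word2vec keep the zero vector

embedding_layer = Embedding(
    input_dim=vocab_size,
    output_dim=embedding_dim,
    weights=[embedding_matrix],
    trainable=False,  # or True, to fine-tune the pretrained vectors
)
```

Words the tokenizer knows but word2vec doesn't are left as zero vectors here; any other out-of-vocabulary policy would slot in the same way.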