What's the difference between initializing a Keras Embedding layer via "embeddings_initializer" versus setting "weights" directly, as in:
# create embedding matrix (from pretrained embeddings for example)
embedding_matrix = ...
# create the embedding layer this way
embedded_sequences = Embedding(num_words,
                               EMBEDDING_DIM,
                               embeddings_initializer=Constant(embedding_matrix),  # <-- this line
                               input_length=MAX_SEQUENCE_LENGTH,
                               trainable=False,
                               )(sequence_input)
# or this way?
embedded_sequences = Embedding(num_words,
                               EMBEDDING_DIM,
                               weights=[embedding_matrix],  # <-- this line
                               input_length=MAX_SEQUENCE_LENGTH,
                               trainable=False,
                               )(sequence_input)
Are they the same? Is one better than the other?
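
For reference, here is a minimal self-contained sketch of both variants side by side (assuming TF 2.x tf.keras; num_words, EMBEDDING_DIM, MAX_SEQUENCE_LENGTH and the random embedding_matrix are just placeholders standing in for my real pretrained embeddings):

import numpy as np
from tensorflow.keras.layers import Input, Embedding
from tensorflow.keras.initializers import Constant

num_words = 1000            # placeholder vocabulary size
EMBEDDING_DIM = 50          # placeholder embedding dimension
MAX_SEQUENCE_LENGTH = 20    # placeholder sequence length
# placeholder matrix standing in for pretrained embeddings
embedding_matrix = np.random.rand(num_words, EMBEDDING_DIM).astype("float32")

sequence_input = Input(shape=(MAX_SEQUENCE_LENGTH,), dtype="int32")

# Variant 1: pass the matrix through embeddings_initializer
emb_via_initializer = Embedding(num_words,
                                EMBEDDING_DIM,
                                embeddings_initializer=Constant(embedding_matrix),
                                input_length=MAX_SEQUENCE_LENGTH,
                                trainable=False)(sequence_input)

# Variant 2: pass the matrix through the weights argument
emb_via_weights = Embedding(num_words,
                            EMBEDDING_DIM,
                            weights=[embedding_matrix],
                            input_length=MAX_SEQUENCE_LENGTH,
                            trainable=False)(sequence_input)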