Generation of word embedding in tensor2tensor-transformer model


siddhant sharma

Jul 13, 2019, 2:08:38 AM
to tensor...@googlegroups.com
Hi All,

I am new to tensor2tensor library and trying to understand the code flow and various essential components.

I am not able to locate the Python file where the word embeddings are created for the first time. Are the word embeddings for all tokens created when the generate-data method is called? I am looking for the class or method which takes the tokens in a word or sentence and generates the word embeddings.

Any guidance or clue about where the word embeddings are first created, and at what stage, would help me connect the dots.


Regards,
Siddhant Sharma

Martin Popel

Jul 13, 2019, 1:42:57 PM
to siddhant sharma, tensor2tensor
Hi Siddhant,
by default, word embeddings are treated like all other weights in the Transformer model:
they are initialized randomly and trained together with the whole network via backpropagation.
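
To illustrate (this is a hedged, library-free sketch in plain numpy, not tensor2tensor's actual code; the sizes and the learning rate are made-up assumptions): the embedding table is just a randomly initialized weight matrix, "embedding" a sentence is a row lookup into it, and training nudges the looked-up rows by their gradients.

```python
import numpy as np

# Hypothetical sizes; a real model reads these from the vocabulary/hparams.
vocab_size, d_model = 100, 16

rng = np.random.default_rng(0)
# The embedding table is just another weight matrix, initialized randomly.
embedding_table = rng.normal(0.0, d_model ** -0.5, size=(vocab_size, d_model))

# "Embedding" a sentence is a row lookup into that matrix.
token_ids = np.array([3, 17, 42])
embedded = embedding_table[token_ids]  # shape (3, d_model)

# During training, backpropagation produces a gradient for each looked-up
# row, and an optimizer step (plain SGD here) updates only those rows.
grad = np.ones_like(embedded)  # stand-in for a real backprop gradient
lr = 0.1
np.add.at(embedding_table, token_ids, -lr * grad)
```

So there is no separate "embedding generation" stage at data-generation time: the data pipeline only maps text to token ids, and the table above is learned during training like any other layer.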

Martin