Hi Mumin,

Currently the code assumes that the embedding size will be the same as the hidden size (as in the original Transformer paper) and fails if they are set to different values. We plan to change this in the near future so that the two sizes can be configured independently.

Best wishes,
Phil
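
To make the constraint concrete: the residual connections in a Transformer add each sublayer's output back to its input, so every layer, the embedding included, must produce vectors of the same width (d_model) unless an extra projection is inserted. Below is a minimal sketch of this in plain NumPy; it is not Nematus code, and all names and sizes are illustrative.

import numpy as np

d_model = 512          # hidden (model) size used throughout the stack
embedding_size = 256   # hypothetical smaller embedding size

x = np.random.randn(10, embedding_size)   # one sentence of 10 token embeddings

def sublayer(x, d_out):
    # stand-in for self-attention / feed-forward: a random linear map
    w = np.random.randn(x.shape[1], d_out)
    return x @ w

# Residual connection: input and sublayer output are added elementwise,
# so their widths must match. With embedding_size != d_model this fails.
try:
    y = x + sublayer(x, d_model)           # (10, 256) + (10, 512)
except ValueError as e:
    print("shape mismatch:", e)

# Decoupling the two sizes requires an explicit projection from the
# embedding space into the model space before the first layer:
w_proj = np.random.randn(embedding_size, d_model)
x_proj = x @ w_proj                        # (10, 512)
y = x_proj + sublayer(x_proj, d_model)     # residual now works
print("with projection:", y.shape)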
On 13 Jan 2020, at 14:33, Mohammad Mumin <mumin...@gmail.com> wrote:
Dear Sir,

Is it necessary to keep the embedding layer size and the hidden layer size the same for the Transformer architecture? Or can I use different sizes, as you did with an embedding size of 512 and a hidden layer size of 1024 for a BiDeep RNN in one of your articles?

Thanks in advance.

Regards,
Mumin.