local_attention_2d vs unmasked_local_attention_2d_tpu
Sumeet Singh
Oct 23, 2019, 8:13:34 PM
to tensor2tensor
Hi All,
I am writing a new image model whose encoder uses 2D local self-attention as described in the Image Transformer paper, but unmasked, i.e. a query position can attend to the entire memory block. I am therefore trying to reuse code from common_attention.py, and I have the following questions:
1. Is common_attention.local_attention_2d unmasked local 2D attention? It looks that way from the code, but the docstring says the memory flange is added only to the left, top, and right of the query block, while the implementation appears to add it on all four sides (left, top, right, and bottom).
2. unmasked_local_attention_2d_tpu is supposed to be unmasked 2D local attention, which is exactly what I need. However, apart from being implemented differently, it appears to accomplish the same thing as the function above. Is that correct? If so, why is there a second function for the same computation? I also don't see any TPU-specific code in it.
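For concreteness, here is roughly how I intend to call local_attention_2d in my encoder. This is just a sketch to show the shapes and blocking I have in mind; the [batch, heads, height, width, depth] layout and the query_shape / memory_flange arguments are what I'm reading off my checkout of common_attention.py, so please correct me if I'm misreading the intended usage.

import tensorflow as tf
from tensor2tensor.layers import common_attention

# Toy q/k/v for one image feature map, laid out as
# [batch, heads, height, width, depth], which is what the 2D local
# attention helpers in common_attention.py appear to expect.
batch, heads, height, width, depth = 1, 4, 32, 32, 64
q = tf.random.normal([batch, heads, height, width, depth])
k = tf.random.normal([batch, heads, height, width, depth])
v = tf.random.normal([batch, heads, height, width, depth])

# 8x8 query blocks with an 8-pixel memory flange around each block.
# If local_attention_2d really is unmasked, every query position in a
# block should attend to the full (block + flange) memory region.
output = common_attention.local_attention_2d(
    q, k, v, query_shape=(8, 8), memory_flange=(8, 8))
print(output.shape)  # expecting [1, 4, 32, 32, 64]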
If any of the T2T code maintainers can respond, I would greatly appreciate it.