# Grid Partitioned Attention

An efficient attention approximation with inductive bias for the image domain.

Code for the paper "Grid Partitioned Attention: Efficient Transformer Approximation with Inductive Bias for High Resolution Detail Generation" by Nikolay Jetchev, Gökhan Yildirim, Christian Bracher, and Roland Vollgraf.
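For intuition, here is a minimal, self-contained sketch of the general idea behind partitioned attention on images: restrict full attention to small grid cells so the cost scales with the cell size rather than the full image. This toy function, its `grid` parameter, and the cell-local attention pattern are illustrative assumptions, not the paper's exact GPA algorithm; see `GPAmodule.py` for the real layer.

```python
import torch

def block_partitioned_attention(x, grid=8):
    """Toy block-wise self-attention over an image tensor.

    Illustrative sketch only: splits the (B, C, H, W) feature map into
    non-overlapping grid x grid cells and runs full attention inside each
    cell, so cost scales with H*W*grid**2 instead of (H*W)**2. This is a
    generic partitioning scheme, not the paper's GPA algorithm.
    """
    B, C, H, W = x.shape
    assert H % grid == 0 and W % grid == 0, "H and W must be divisible by grid"
    g = grid
    # (B, C, H, W) -> (B * num_cells, g*g, C): one attention problem per cell
    cells = x.view(B, C, H // g, g, W // g, g)
    cells = cells.permute(0, 2, 4, 3, 5, 1).reshape(-1, g * g, C)
    # features serve as queries, keys, and values (no projections, for brevity)
    attn = torch.softmax(cells @ cells.transpose(1, 2) / C ** 0.5, dim=-1)
    out = attn @ cells                                   # (B * num_cells, g*g, C)
    # invert the partitioning back to (B, C, H, W)
    out = out.view(B, H // g, W // g, g, g, C)
    return out.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

x = torch.randn(2, 64, 32, 32)
y = block_partitioned_attention(x, grid=8)
print(y.shape)  # torch.Size([2, 64, 32, 32])
```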

The file `GPAmodule.py` contains the GPA layer definition and an example of how to apply it to an image tensor.
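A hypothetical usage sketch follows; the class name `GPA` and its constructor arguments are assumptions made for illustration, so consult `GPAmodule.py` for the actual interface.

```python
import torch
from GPAmodule import GPA  # hypothetical import; check GPAmodule.py for the real class name

# hypothetical constructor argument -- consult the file for the real signature
layer = GPA(in_channels=64)

x = torch.randn(1, 64, 128, 128)  # (batch, channels, height, width) image tensor
y = layer(x)                      # attention-refined feature map
print(y.shape)
```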

TODO: add the full generator architecture for pose morphing with attention copying, as described in the paper.