Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch; see the sketch after this list.
Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch
PyTorch implementation of Efficient Infinite Context Transformers with Infini-attention, plus a QwenMoE implementation, a training script, and 1M-context passkey retrieval
Unofficial PyTorch implementation of the paper "cosFormer: Rethinking Softmax In Attention".
The official PyTorch implementation of CascadedGaze: Efficiency in Global Context Extraction for Image Restoration (TMLR 2024)
PyTorch implementation of "Compact Global Descriptor for Neural Networks" (CGD)
Implementation of Hydra Attention: Efficient Attention with Many Heads (https://arxiv.org/abs/2209.07484); see the sketch after this list.
Nonparametric Modern Hopfield Models
Official Implementation of SEA: Sparse Linear Attention with Estimated Attention Mask (ICLR 2024)
Minimal implementation of Samba by Microsoft in PyTorch
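
For the Ring Attention entry above, here is a minimal single-process sketch of the blockwise accumulation the method builds on. In the actual algorithm each key/value block lives on its own device and is rotated around a ring so communication overlaps with compute; this toy version simply loops over blocks held in one tensor. Function and variable names are illustrative assumptions, not the linked repo's API.

```python
import torch

def blockwise_attention(q, k, v, block_size=128):
    # q, k, v: (batch, seq_len, dim); single head, no causal mask, for brevity
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    # running row-wise max and normalizer for a numerically stable online softmax
    row_max = torch.full(q.shape[:-1], float('-inf'), device=q.device).unsqueeze(-1)
    row_sum = torch.zeros_like(row_max)

    for k_blk, v_blk in zip(k.split(block_size, dim=1), v.split(block_size, dim=1)):
        scores = torch.einsum('bqd,bkd->bqk', q, k_blk) * scale
        new_max = torch.maximum(row_max, scores.amax(dim=-1, keepdim=True))
        # rescale the previously accumulated output and normalizer to the new max
        correction = torch.exp(row_max - new_max)
        exp_scores = torch.exp(scores - new_max)
        out = out * correction + torch.einsum('bqk,bkd->bqd', exp_scores, v_blk)
        row_sum = row_sum * correction + exp_scores.sum(dim=-1, keepdim=True)
        row_max = new_max

    return out / row_sum

# sanity check against dense attention
q, k, v = (torch.randn(2, 512, 64) for _ in range(3))
dense = torch.softmax(torch.einsum('bqd,bkd->bqk', q, k) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(blockwise_attention(q, k, v), dense, atol=1e-4)
```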
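
And for the Hydra Attention entry, a minimal sketch of the core idea, assuming one attention "head" per feature channel and a cosine-similarity kernel; under those assumptions attention collapses to a single global aggregate and costs O(N·D) instead of O(N²·D). Names are illustrative, not the linked implementation's API.

```python
import torch
import torch.nn.functional as F

def hydra_attention(q, k, v):
    # q, k, v: (batch, seq_len, dim); no causal mask
    q = F.normalize(q, dim=-1)             # cosine-similarity kernel: L2-normalize features
    k = F.normalize(k, dim=-1)
    kv = (k * v).sum(dim=1, keepdim=True)  # global aggregate, shape (batch, 1, dim)
    return q * kv                          # each token gates the shared aggregate

out = hydra_attention(torch.randn(2, 1024, 256), torch.randn(2, 1024, 256), torch.randn(2, 1024, 256))
print(out.shape)  # torch.Size([2, 1024, 256])
```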