Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton.
Updated Jan 10, 2025 - Python
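For readers unfamiliar with the term, here is a minimal sketch of what "sub-quadratic attention" means in practice. It uses a generic causal sliding-window pattern in plain PyTorch, not this repository's Triton-based method: each query attends only to the `window` most recent keys, so compute and memory scale as O(n·window) rather than O(n²).

```python
# Generic sliding-window attention sketch (illustrative only, not this repo's method).
import torch
import torch.nn.functional as F


def sliding_window_attention(q, k, v, window=64):
    """Causal sliding-window attention.

    q, k, v: (batch, heads, seq_len, dim). Each query attends only to the
    `window` most recent keys (including itself), so the score tensor is
    (batch, heads, seq_len, window) instead of (batch, heads, seq_len, seq_len).
    """
    b, h, n, d = q.shape
    # Left-pad keys/values so every query position sees exactly `window` candidates.
    k_pad = F.pad(k, (0, 0, window - 1, 0))              # (b, h, n + window - 1, d)
    v_pad = F.pad(v, (0, 0, window - 1, 0))
    k_win = k_pad.unfold(2, window, 1)                   # (b, h, n, d, window) view
    v_win = v_pad.unfold(2, window, 1)
    scores = torch.einsum("bhnd,bhndw->bhnw", q, k_win) / d ** 0.5
    # Mask out the padded (non-existent) positions near the start of the sequence.
    j = torch.arange(window, device=q.device)
    i = torch.arange(n, device=q.device).unsqueeze(-1)
    scores = scores.masked_fill(i + j - (window - 1) < 0, float("-inf"))
    attn = scores.softmax(dim=-1)
    return torch.einsum("bhnw,bhndw->bhnd", attn, v_win)


if __name__ == "__main__":
    q, k, v = (torch.randn(2, 4, 1024, 64) for _ in range(3))
    print(sliding_window_attention(q, k, v, window=128).shape)  # torch.Size([2, 4, 1024, 64])
```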
A PyTorch implementation of MEGABYTE, a multi-scale transformer architecture that is tokenization-free and uses sub-quadratic attention. Paper: https://arxiv.org/abs/2305.07185
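A much-simplified sketch of the multi-scale idea, assuming fixed-size byte patches: a global block attends over the n/P patch embeddings while a local block attends only within each P-byte patch, so no attention matrix is quadratic in the full sequence length. Module names and dimensions are illustrative, and the causal masks and patch shifting of the actual autoregressive model are omitted for brevity; this is not the reference implementation.

```python
# Toy multi-scale (patch-level global + byte-level local) model in the spirit of MEGABYTE.
import torch
import torch.nn as nn


class TinyMegabyte(nn.Module):
    def __init__(self, vocab=256, patch=8, d_local=64, d_global=256, n_heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Embedding(vocab, d_local)
        self.to_global = nn.Linear(patch * d_local, d_global)   # one patch of bytes -> one global token
        self.global_block = nn.TransformerEncoderLayer(
            d_global, n_heads, batch_first=True)                # attends over n/patch positions
        self.to_local = nn.Linear(d_global, patch * d_local)    # global context back to each byte
        self.local_block = nn.TransformerEncoderLayer(
            d_local, n_heads, batch_first=True)                 # attends over `patch` positions only
        self.head = nn.Linear(d_local, vocab)

    def forward(self, bytes_in):                                # (batch, seq_len), seq_len % patch == 0
        b, n = bytes_in.shape
        x = self.embed(bytes_in)                                # (b, n, d_local)
        patches = x.view(b, n // self.patch, -1)                # (b, n/P, P * d_local)
        g = self.global_block(self.to_global(patches))          # global attention: O((n/P)^2)
        local_in = x + self.to_local(g).view(b, n, -1)          # add patch-level context to each byte
        local_in = local_in.view(b * n // self.patch, self.patch, -1)
        h = self.local_block(local_in)                          # local attention: O(P^2) per patch
        return self.head(h).view(b, n, -1)                      # per-byte logits


if __name__ == "__main__":
    print(TinyMegabyte()(torch.randint(0, 256, (2, 64))).shape)  # torch.Size([2, 64, 256])
```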