A Dual-Stage Attention-Based Recurrent Neural Network for Time Series Prediction (Python; updated Jul 4, 2024)
An attention based approach to convert Indian Sign Language to Text using simulated hand gesture data
The attention heads in the Transformer architecture possess a variety of capabilities. This is a carefully compiled list that summarizes the diverse functions of the attention heads.
GPT-based protein language model for PTM site prediction
Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch
Pure C multi-modal 3D hybrid GAN using cross-attention, attention, and convolution
LSTM-ARIMA with Attention and multiplicative decomposition for sophisticated stock forecasting.
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM use, fast training, "infinite" ctx_len, and free sentence embeddings.
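The constant-memory inference claimed above comes from RWKV's WKV operator, which replaces attention with a decayed running average over past tokens. Below is a simplified NumPy sketch of that recurrence (assumptions: scalar decay `w` and bonus `u`, and none of the numerical-stability tricks or channel-wise parameters of the real implementation):

```python
import numpy as np

def wkv_recurrence(k, v, w, u):
    """Simplified sketch of RWKV's WKV operator.

    k, v: (T, d) per-step keys and values; w: decay rate > 0;
    u: extra bonus applied to the current token.
    The state (a, b) has constant size, so cost per token is O(d),
    unlike attention's O(T) lookback.
    """
    T, d = k.shape
    a = np.zeros(d)   # running decayed sum of exp(k) * v
    b = np.zeros(d)   # running decayed sum of exp(k)
    out = np.zeros((T, d))
    for t in range(T):
        # the current token enters with bonus u before joining the state
        out[t] = (a + np.exp(u + k[t]) * v[t]) / (b + np.exp(u + k[t]))
        a = np.exp(-w) * a + np.exp(k[t]) * v[t]
        b = np.exp(-w) * b + np.exp(k[t])
    return out

# Toy example: 5 steps, 4 channels
rng = np.random.default_rng(1)
out = wkv_recurrence(rng.standard_normal((5, 4)),
                     rng.standard_normal((5, 4)), w=0.5, u=0.1)
print(out.shape)  # (5, 4)
```

The same quantity can be computed in parallel over the time axis at training time, which is what makes RWKV trainable "like a GPT" while running as an RNN at inference.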
Hybrid Ensemble Approach For 3D Object Reconstruction from Multi-View Monocular RGB images
Tensorflow implementation of a 3D-CNN U-net with Grid Attention and DSV for pancreas segmentation trained on CT-82.
The Enterprise-Grade, Production-Ready Multi-Agent Orchestration Framework. Join our community: https://discord.com/servers/agora-999382051935506503
Attention-guided Feature Distillation for Semantic Segmentation
This is an unofficial PyTorch implementation of the Infini-Attention mechanism introduced in the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention". Note that the official code for the paper has not been released yet. If you run into issues, open a PR with an explanation of the changes made and the reasoning behind them.
YOLOv8-AM: YOLOv8 with Attention Mechanisms for Pediatric Wrist Fracture Detection
Learn Generative AI with PyTorch (Manning Publications, 2024)
Generative_Image_Rotation: Using Pix2Pix cGAN to transform randomly oriented Protoplanetary Disk images into standardized face-on views for astronomical research.
A simple but complete full-attention transformer with a set of promising experimental features from various papers
A Decoder-only Transformer model for text generation.
An introduction to attention mechanisms and the vision transformer
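The core operation behind every repository in this list is scaled dot-product attention, softmax(QKᵀ/√d)·V. A minimal NumPy sketch for readers new to the mechanism (illustrative only, not taken from any repo above):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # weighted sum of values

# Toy example: 3 queries attending over 4 key/value pairs of dim 8
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8)
```

Multi-head attention, as used in the vision transformer, runs several such operations in parallel on learned projections of the input and concatenates the results.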
Contrastive-LSH Embedding and Tokenization Technique for Multivariate Time Series Classification
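As background for LSH-based tokenization schemes like the one above, here is a minimal random-hyperplane LSH sketch that maps vectors to discrete codes (assumption: this shows only the generic LSH idea, not the repo's contrastive training):

```python
import numpy as np

def lsh_codes(X, n_bits=8, seed=0):
    """Random-hyperplane LSH: map each row of X to a short binary code.

    Vectors that are close in cosine similarity tend to fall on the same
    side of most random hyperplanes, so they tend to share codes.
    """
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], n_bits))
    bits = (X @ planes) > 0                   # sign pattern per hyperplane
    return bits @ (1 << np.arange(n_bits))    # pack bits into integer tokens

# Two nearby vectors and one opposite vector
X = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0]])
codes = lsh_codes(X)
print(codes)  # opposite vectors get complementary bit patterns
```

Such integer codes can serve as discrete tokens for downstream sequence models over multivariate time-series windows.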