PyTorch implementation of Nextformer: A ConvNeXt Augmented Conformer For End-To-End Speech Recognition.
Conformer models have achieved state-of-the-art (SOTA) results in end-to-end speech recognition. However, Conformer mainly focuses on temporal modeling and pays less attention to the time-frequency properties of speech features. The authors augment Conformer with ConvNeXt and propose the Nextformer structure: a stack of ConvNeXt blocks replaces the commonly used subsampling module in Conformer, so the information contained in the time-frequency speech features is better utilized. In addition, they insert a downsampling module in the middle of the Conformer layers to make the Nextformer model more efficient and accurate.
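To illustrate the kind of module the description refers to, here is a minimal sketch of a ConvNeXt block as defined in "A ConvNet for the 2020s" (depthwise 7x7 convolution, LayerNorm, pointwise expansion, GELU, pointwise projection, residual connection), applied to a time-frequency speech feature map. This is an assumption-laden sketch for illustration, not the actual Nextformer implementation, and the tensor shapes and block count are made up for the example.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Sketch of a standard ConvNeXt block (not the authors' exact code)."""

    def __init__(self, dim: int):
        super().__init__()
        # Depthwise 7x7 convolution (groups == channels)
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)            # normalizes over the channel dim
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # pointwise expansion (1x1 conv as Linear)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)   # pointwise projection back

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, freq)
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                # (B, T, F, C): LayerNorm/Linear act on C
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)                # back to (B, C, T, F)
        return residual + x

# Example: a stack of ConvNeXt blocks over a spectrogram-like feature map.
blocks = nn.Sequential(*[ConvNeXtBlock(dim=64) for _ in range(3)])
feats = torch.randn(2, 64, 100, 80)              # (batch, channels, time, mel bins)
out = blocks(feats)
print(out.shape)                                 # torch.Size([2, 64, 100, 80])
```

In the paper, a stack like this replaces Conformer's front-end subsampling module; the shape-preserving residual design is what lets the blocks be stacked freely.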
This repository contains only model code.
This project requires Python 3.7 or higher. We recommend creating a new virtual environment for this project (using virtualenv or conda).
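For example, a virtual environment can be set up with the standard library `venv` module (the environment name below is a placeholder, not one prescribed by this project):

```shell
# Create and activate an isolated environment for the project.
python3 -m venv nextformer-env
. nextformer-env/bin/activate
# Confirm the interpreter meets the 3.7+ requirement.
python --version
```

With conda, the equivalent would be `conda create -n <name> python=3.8` followed by `conda activate <name>`.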
If you have any questions, bug reports, or feature requests, please open an issue on GitHub or
contact [email protected].
I appreciate any kind of feedback or contribution. Feel free to proceed with small issues like bug fixes and documentation improvements. For major contributions and new features, please discuss them with the collaborators in the corresponding issues.
I follow Black for code style. The style of docstrings is especially important, as they are used to generate documentation.
- Nextformer: A ConvNeXt Augmented Conformer For End-To-End Speech Recognition
- A ConvNet for the 2020s
- Conformer: Convolution-augmented Transformer for Speech Recognition
- sooftware/conformer
- facebookresearch/ConvNeXt
- Nguyen Van Anh Tuan @tuanio
- Contacts: [email protected]