# Wav2Vec2/Whisper Fine-Tuning
The objective is to fine-tune two state-of-the-art speech recognition architectures, Wav2Vec2 and Whisper, on the Common Voice Turkish dataset and compare their results.
## Skills & Tools
- Machine Learning
- Data Pre-Processing
- PyTorch
- Transformers
- librosa
- jiwer
## Results
- Whisper-small: 22% WER (`max_train_steps=1500`)
- Wav2Vec2 XLS-R-300M: 32% WER (`num_epochs=30`)

Under these settings, Whisper-small outperforms Wav2Vec2 XLS-R-300M by 10 WER points on Turkish.
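For context on the metric reported above: word error rate (WER) is the number of word-level substitutions, deletions, and insertions needed to turn the hypothesis into the reference, divided by the number of reference words. The jiwer library computes this for you; the pure-Python sketch below shows the underlying Levenshtein computation for illustration (function name and structure are my own, not jiwer's API):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate = (substitutions + deletions + insertions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution (or match)
            )
    return d[len(ref)][len(hyp)] / len(ref)


# one substitution out of four reference words -> WER = 0.25
print(wer("merhaba nasılsın bugün iyi", "merhaba nasılsın dün iyi"))
```

In practice jiwer also applies text normalization (lowercasing, punctuation stripping) before scoring, which can noticeably change reported WER, so both models should be evaluated with identical normalization.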