# Wav2Vec2/Whisper Fine-Tuning

**Project Status:** Completed

## Project Objective

The objective is to fine-tune two state-of-the-art speech-recognition architectures, Wav2Vec2 and Whisper, on the Common Voice Turkish dataset and compare the results.

## Methods Used

- Machine Learning
- Data Pre-Processing

## Libraries

- PyTorch
- Transformers
- librosa
- jiwer

## Results

| Model | Training budget | WER |
| --- | --- | --- |
| Whisper-small | 1500 max train steps | 22% |
| Wav2Vec2 XLS-R-300M | 30 epochs | 32% |
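
The WER figures above are word-level edit distances (substitutions + insertions + deletions) divided by the number of reference words, which is what jiwer computes. A minimal pure-Python illustration of the metric (hypothetical Turkish sentences for demonstration):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words -> 25% WER.
print(wer("merhaba nasılsın bugün iyi", "merhaba nasılsın dün iyi"))  # -> 0.25
```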