Why are my reproduced WavLM results on Vox1-O 30% worse? #28
In my experiment, the wavlm_large_finetune EER is 0.574.
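For context, the EER (equal error rate) quoted above is in percent: the operating point where the false-accept and false-reject rates cross. A generic way to compute it from trial scores (not necessarily the repo's own evaluation script):

```python
# A generic EER computation from trial scores; illustrative, not
# necessarily the evaluation code used in this repo.
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """labels: 1 for target trials, 0 for impostor; scores: similarity scores."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))      # operating point where FAR ~= FRR
    return 100.0 * (fpr[idx] + fnr[idx]) / 2.0  # EER in percent
```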
Hi @AIDman, as for the environment error, could you replace this line
with self.feature_extract = torch.hub.load('s3prl/s3prl:e52439edaeb1a443e82960e6401ae6ab4241def6', feat_type) and try again? The fairseq library is not necessary for running inference with the WavLM model. The older version of s3prl automatically skips the ImportError raised by the missing fairseq, but the latest version of the s3prl code accidentally re-raises it.
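For clarity, a minimal sketch of where that replacement would sit, assuming a wrapper module similar to this repo's speaker-verification model; the class name and default feat_type are illustrative, and only the torch.hub.load call comes from the reply above:

```python
# Sketch only: the surrounding class is hypothetical; the torch.hub.load
# call is the suggested replacement from the reply above.
import torch

class SpeakerEncoder(torch.nn.Module):  # hypothetical wrapper
    def __init__(self, feat_type='wavlm_large'):
        super().__init__()
        # Pin s3prl to a fixed commit so torch.hub fetches a version that
        # tolerates a missing fairseq instead of raising ImportError.
        self.feature_extract = torch.hub.load(
            's3prl/s3prl:e52439edaeb1a443e82960e6401ae6ab4241def6',
            feat_type,
        )
```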
As for the fine-tuning results for speaker verification, we use adaptive s-norm to normalize the trial scores and further apply the quality-aware score calibration introduced in Section V.C-3 of our WavLM paper.
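Not the authors' code, but a minimal sketch of adaptive s-norm (AS-Norm) as commonly implemented; the cohort size k is an assumption:

```python
# A minimal sketch of adaptive s-norm, not the authors' exact implementation.
import numpy as np

def as_norm(score, enroll_cohort_scores, test_cohort_scores, k=300):
    """Normalize one trial score with top-k adaptive cohort statistics.

    enroll_cohort_scores / test_cohort_scores: scores between the
    enrollment / test embedding and every cohort speaker embedding.
    """
    top_e = np.sort(enroll_cohort_scores)[-k:]  # k most similar cohort scores
    top_t = np.sort(test_cohort_scores)[-k:]
    z = (score - top_e.mean()) / top_e.std()    # normalize w.r.t. enrollment side
    t = (score - top_t.mean()) / top_t.std()    # normalize w.r.t. test side
    return 0.5 * (z + t)
```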
Can you provide the code for the quality-aware score calibration? Thank you!
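While waiting on the official code, here is a hedged sketch in the spirit of Section V.C-3: a logistic-regression calibrator over the raw (normalized) score plus quality measures. The specific quality features (e.g. log utterance durations) are assumptions, not necessarily the paper's exact recipe:

```python
# A hedged sketch of quality-aware score calibration; the quality features
# are assumptions, not the paper's exact recipe.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_calibrator(scores, qualities, labels):
    # scores: (N,) raw trial scores; qualities: (N, Q) quality measures,
    # e.g. log durations of the enrollment and test utterances;
    # labels: (N,) 1 for target trials, 0 for impostor trials.
    X = np.column_stack([scores, qualities])
    return LogisticRegression().fit(X, labels)

def calibrate(calibrator, scores, qualities):
    X = np.column_stack([scores, qualities])
    # The logit (decision function) serves as the calibrated score.
    return calibrator.decision_function(X)
```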
The above results come from validating your shared wav_lm models on the original Vox1-O data without changing any code.
What might be the reason for this gap? Wrong settings?
Here is more background on my setup:
The following error appears:
Then I installed the environment manually (around 30~40 packages), just as in #26.
pip list | grep fairseq
fairseq 0.12.1 /home/user1/tools/fairseq
pip list | grep s3prl
s3prl 0.3.1
torch.__version__: 1.9.0+cu102
python -V: 3.8.13
Thanks for your wonderful work, and looking forward to your help.