Hi,
Thanks for sharing such an amazing project! Regarding the pre-processing, you state that you follow the video-preprocessing scripts. However, that code crops using the bounding boxes provided in its meta CSV file, which differ from the original bounding boxes shipped with the VoxCeleb1 dataset, as discussed in issue21.
May I therefore ask which bounding boxes you used to obtain the cropped images?
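To make the question concrete, here is a minimal sketch of the kind of crop I am referring to, i.e. taking the box from the meta CSV rather than from the original VoxCeleb1 annotations. The file name, column name, and `left-top-right-bottom` encoding below are my assumptions for illustration only, not the actual schema of the video-preprocessing repo:

```python
import pandas as pd
from PIL import Image

# Hypothetical meta CSV with one row per clip; the "bbox" column name and
# "left-top-right-bottom" encoding are assumptions, not the real schema.
meta = pd.read_csv("vox-metadata.csv")
row = meta.iloc[0]
left, top, right, bottom = map(int, row["bbox"].split("-"))

# Crop a frame with the meta-csv box (not the original VoxCeleb1 box).
frame = Image.open("frame_0001.png")
cropped = frame.crop((left, top, right, bottom))
cropped.save("frame_0001_cropped.png")
```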
Besides, the original VoxCeleb1 dataset contains 1k+ subjects in the training set and 40+ subjects in the test set, whereas the meta file from video-preprocessing covers only 400+ subjects, a subset of the original VoxCeleb1 dataset.
May I also ask which dataset you used for training: the full set with 1k+ subjects, or the subset defined by video-preprocessing?
Looking forward to your reply. Thanks in advance!