
Why not just crop the faces with their meta data from VoxCeleb since we already have the face bboxes? #21

Open
Cold-Winter opened this issue Apr 15, 2021 · 3 comments

Comments


Cold-Winter commented Apr 15, 2021

Thank you for the elegant implementation. It helps a lot!

I am wondering why you re-detect faces in the VoxCeleb dataset when the dataset already ships face bounding-box metadata. Are you trying to crop tighter face boxes than the provided ones? And what would happen if we trained the first-order model on faces cropped with the provided boxes?
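To make the question concrete, here is a minimal sketch (my own code, not the repository's preprocessing) of cropping a clip with the boxes that ship with VoxCeleb. I am assuming the metadata .txt lists "FRAME X Y W H" rows with X, Y, W, H given as fractions of the frame size; the parsing would need adjusting if your copy of the metadata uses absolute pixels.

```python
# Sketch only: crop VoxCeleb frames with the dataset's own bounding boxes
# instead of re-running a face detector.
import cv2

def parse_meta(txt_path):
    """Return {frame_index: (x, y, w, h)} with fractional coordinates (assumed format)."""
    boxes = {}
    with open(txt_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 5 and parts[0].isdigit():
                boxes[int(parts[0])] = tuple(float(v) for v in parts[1:])
    return boxes

def crop_clip(video_path, txt_path, out_size=256):
    boxes = parse_meta(txt_path)
    cap = cv2.VideoCapture(video_path)
    crops, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx in boxes:
            h, w = frame.shape[:2]
            x, y, bw, bh = boxes[idx]
            l, t = int(x * w), int(y * h)
            r, b = int((x + bw) * w), int((y + bh) * h)
            crops.append(cv2.resize(frame[t:b, l:r], (out_size, out_size)))
        idx += 1
    cap.release()
    return crops
```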

@Cold-Winter Cold-Winter changed the title Why we need crop the face for Voxceleb dataset Why not just crop the faces with their meta data from VoxCeleb since we already have the face bboxes? Apr 15, 2021
@charan223

Any update on this?

@brianw0924

Same question


HowieMa commented Feb 21, 2023

Same question. Also, the provided bounding boxes are not square. For example, one box spans (1018 - 648, 553 - 48) = (370, 505) pixels, yet this code resizes that rectangular crop directly to a square, as in here.
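For reference, one way to avoid the distortion would be to expand the rectangular box to a square around its center before resizing. A rough sketch (my own code, not taken from the repository; assumes an HxWx3 frame as returned by OpenCV):

```python
import cv2
import numpy as np

def square_crop(frame, left, top, right, bottom, out_size=256):
    """Crop a square region centered on a (possibly rectangular) box, then resize."""
    h, w = frame.shape[:2]
    cx, cy = (left + right) / 2.0, (top + bottom) / 2.0
    side = int(round(max(right - left, bottom - top)))
    l = int(round(cx - side / 2.0))
    t = int(round(cy - side / 2.0))
    r, b = l + side, t + side
    # Zero-pad if the square extends past the frame borders.
    pad_l, pad_t = max(0, -l), max(0, -t)
    pad_r, pad_b = max(0, r - w), max(0, b - h)
    padded = np.pad(frame, ((pad_t, pad_b), (pad_l, pad_r), (0, 0)))
    crop = padded[t + pad_t:b + pad_t, l + pad_l:r + pad_l]
    return cv2.resize(crop, (out_size, out_size))
```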

