Some questions about your paper #385
Comments
Hi @Pixie8888, regarding your first question: after training the generative network in the first stage, we can generate a facial image in the second stage given the eyes-open condition c_{s,eyes}. We then use a pre-trained explicit 2D facial keypoint model to extract the distance between the upper and lower eyelid points and align this distance with c_{s,eyes}. The process for the lips is analogous. For your second question: equation 7 includes logic for estimating relative motion, which you can think of as a redirection operation.
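The eye-distance measurement described above could be sketched as follows. This is a hypothetical illustration, not the authors' code: the 68-point landmark layout (left-eye points 36-41) and the choice of eyelid indices are assumptions, since the thread does not name a specific keypoint model.

```python
import numpy as np

def eye_opening_distance(landmarks: np.ndarray) -> float:
    """Mean vertical gap between upper and lower eyelid points of the left eye.

    `landmarks` is an (N, 2) array of 2D keypoints; indices 37/38 (upper lid)
    and 41/40 (lower lid) follow the common 68-point convention -- an
    assumption, not necessarily the model used in the paper.
    """
    upper = landmarks[[37, 38]]            # upper-eyelid points
    lower = landmarks[[41, 40]]            # matching lower-eyelid points
    gaps = np.linalg.norm(upper - lower, axis=1)
    return float(np.mean(gaps))

# Example: two eyelid point pairs separated vertically by 4 pixels.
lm = np.zeros((68, 2))
lm[37], lm[38] = [10.0, 0.0], [12.0, 0.0]
lm[41], lm[40] = [10.0, 4.0], [12.0, 4.0]
print(eye_opening_distance(lm))  # → 4.0
```

In training, a value like this would be compared against c_{s,eyes} to keep the generated eye opening consistent with the condition.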
Thanks for your reply!
This condition is the input to the MLP network, as shown in Fig. 3 of our paper. c_{s,eyes} is estimated from the driving image I_s via explicit 2D facial keypoints. The network's output is added as an increment to the main network's output, and all of this happens before the generator.
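The conditioning path just described can be sketched as below. This is a minimal illustration under stated assumptions: the layer sizes, the single hidden layer, and the scalar condition are all placeholders, not the architecture from Fig. 3.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_increment(c: np.ndarray, w1, b1, w2, b2) -> np.ndarray:
    """Map the condition c to an additive increment in feature space.

    A single ReLU hidden layer stands in for the paper's MLP; the real
    network's depth and widths are not specified in this thread.
    """
    h = np.maximum(0.0, c @ w1 + b1)   # hidden activation
    return h @ w2 + b2                 # increment added to the main output

feat_dim = 8
w1, b1 = rng.standard_normal((1, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, feat_dim)), np.zeros(feat_dim)

main_feat = rng.standard_normal((1, feat_dim))   # main network's output
c_s_eyes = np.array([[0.3]])                     # eye-openness condition
conditioned = main_feat + mlp_increment(c_s_eyes, w1, b1, w2, b2)
# `conditioned` is what would then be passed on to the generator.
```

The key point is only the dataflow: the condition enters through a small side network whose output is summed with the main features before generation.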
Thanks! |
Dear authors,
I have some questions about your paper: