
Details on calculating principal direction vectors for attributes #16

Open
VSehwag opened this issue Apr 27, 2020 · 3 comments

Comments

VSehwag commented Apr 27, 2020

Hi,

Thanks for releasing such well-written code and the interactive demo for the paper. Even when testing on real-world images, the reconstruction and semantic edits work very well.

However, I was wondering whether you are planning to release more details on the semantic editing part. In particular, I couldn't find any details in the paper on how the principal direction vectors are calculated. Surprisingly, the paper doesn't report any results on semantic editing, i.e., the edits demonstrated in the demo.

Are you planning to release additional documentation covering these details? Or is this a generic methodology that is well known in the community? I am relatively new to this domain and not sure.

Thanks.


jychoi118 commented May 15, 2020

I think the principal directions are acquired following this readme.
[principle_direction_read_me]

First generate images, then predict attribute scores with pre-trained classifiers for CelebA-HQ, then train an SVM for each attribute. This method looks the same as in the InterFaceGAN literature. [InterFaceGAN]
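For readers unfamiliar with this InterFaceGAN-style procedure, here is a minimal sketch of the direction-finding step. All names and data below are toy stand-ins, not the repo's actual script: fit a linear SVM separating latent codes by a binary attribute score, then use the unit normal of its decision hyperplane as the attribute direction.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-ins for real data: `latents` would be codes sampled from the GAN,
# and `labels` would come from a pre-trained attribute classifier applied
# to the corresponding generated images.
latents = rng.normal(size=(1000, 512))    # 1000 latent codes, dim 512
labels = (latents[:, 0] > 0).astype(int)  # toy binary attribute labels

# Fit a linear SVM that separates the two attribute classes in latent space.
svm = LinearSVC(C=1.0, max_iter=10000)
svm.fit(latents, labels)

# The attribute direction is the hyperplane normal, normalized to unit length.
direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

# Editing: moving a latent code along the direction changes the attribute.
edited = latents[0] + 3.0 * direction
```

With real latents and classifier scores, the same two calls (fit, then normalize `coef_`) give one direction per attribute.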

I'm curious why this paper didn't cite the InterFaceGAN work...

@veeramakalivignesh

Hi, I am trying to train the model on a dataset of mammograms (breast X-ray images), and I want some control over important features, like in the interactive demo for CelebA-HQ.

According to @jychoi118, the scripts here predict attribute scores with pre-trained classifiers for CelebA-HQ. Can someone shed some light on this? Did they use a labeled dataset to do this?

@liujingwen-bmil

I think the principal directions are acquired following this readme. [principle_direction_read_me]

First generate images, then predict attribute scores with pre-trained classifiers for CelebA-HQ, then train an SVM for each attribute. This method looks the same as in the InterFaceGAN literature. [InterFaceGAN]

I'm curious why this paper didn't cite the InterFaceGAN work...

Thanks for your reply. Could you please give more details on how I can obtain pre-trained classifiers for my own dataset (not faces) and how to train a new SVM? Any related project would help. Thanks again.
