Loss Functions: Paper vs Code #25

Open

Ceralu opened this issue Aug 8, 2017 · 1 comment

Comments

Ceralu commented Aug 8, 2017

I'm seeing a difference between the loss function described in the paper and the loss functions in the code.

For the supervised loss in the code, I understand that minimizing loss_lab is equivalent to driving T.sum(T.exp(output_before_softmax_lab)) toward 1 and also making D(x_lab) equal to 1 at the correct label.
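
If loss_lab has the usual form -mean(correct-class logit) + mean(log_sum_exp(logits)) over the K real classes (which is how I read the reference implementation, though I may be misreading it), then minimizing it is exactly minimizing the standard K-class cross-entropy. A minimal numpy sketch of that identity, with hypothetical helper names:

```python
import numpy as np

def log_sum_exp(logits):
    # Numerically stable log(sum(exp(logits))) across the class axis.
    m = logits.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).squeeze(1)

def loss_lab(logits, labels):
    # The supervised loss as I read it in the code:
    # -mean(correct-class logit) + mean(log_sum_exp(all logits)).
    l_lab = logits[np.arange(len(labels)), labels]
    return -l_lab.mean() + log_sum_exp(logits).mean()

def cross_entropy(logits, labels):
    # Standard -log softmax(logits)[label], averaged over the batch.
    log_probs = logits - log_sum_exp(logits)[:, None]
    return -log_probs[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))   # batch of 4, K = 10 classes
labels = rng.integers(0, 10, size=4)
assert np.isclose(loss_lab(logits, labels), cross_entropy(logits, labels))
```

In other words, pushing T.sum(T.exp(...)) (the partition function) toward 1 while raising the correct logit is the same objective viewed from two angles.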

What I don't understand, however, is the expression for loss_unl. How is it equivalent to the loss L_unsupervised in the paper, which aims to make the discriminator predict class K+1 when the data is fake and anything other than K+1 when the data is unlabelled?

Edit: I accidentally submitted the issue before I finished writing it.
Edit: This is similar to issue #14, which never received an answer.
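
For what it's worth, the equivalence in question can be checked numerically, assuming the normalization trick described in the paper: since the softmax over K+1 classes is over-parameterized, the (K+1)-th "fake" logit can be fixed at 0, which gives D(x) = Z(x) / (Z(x) + 1) with Z(x) = Σ_k exp(l_k(x)). Under that assumption, -log D(x_unl) and -log(1 - D(x_gen)) reduce to the log_sum_exp/softplus terms that appear in loss_unl (the code additionally weights each term by 0.5, if I read it correctly). A sketch of the check:

```python
import numpy as np

def log_sum_exp(logits):
    # Numerically stable log(sum(exp(logits))) across the class axis.
    m = logits.max(axis=1, keepdims=True)
    return (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).squeeze(1)

def softplus(x):
    # log(1 + exp(x))
    return np.log1p(np.exp(x))

rng = np.random.default_rng(0)
logits_unl = rng.normal(size=(4, 10))  # D's K real-class logits on unlabelled data
logits_gen = rng.normal(size=(4, 10))  # D's K real-class logits on generated data

# Paper's form: with the fake logit pinned to 0, D(x) = Z(x) / (Z(x) + 1).
Z_unl = np.exp(logits_unl).sum(axis=1)
Z_gen = np.exp(logits_gen).sum(axis=1)
paper = -np.log(Z_unl / (Z_unl + 1)).mean() - np.log(1.0 / (Z_gen + 1)).mean()

# Code's form (0.5 weights dropped): -log D(x_unl) = softplus(lse) - lse,
# and -log(1 - D(x_gen)) = softplus(lse), where lse = log_sum_exp(logits).
l_unl, l_gen = log_sum_exp(logits_unl), log_sum_exp(logits_gen)
code = -l_unl.mean() + softplus(l_unl).mean() + softplus(l_gen).mean()

assert np.isclose(paper, code)
```

So minimizing loss_unl does push the discriminator toward "not K+1" on unlabelled data and toward "K+1" on generated data; the K+1 class simply never appears explicitly because its logit has been normalized away.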

Ceralu closed this as completed Aug 8, 2017
Ceralu reopened this Aug 8, 2017
zychen2016 commented
Could anyone answer this issue?
