Adv Loss is not supported by the paper??? #3

Open
xljhtq opened this issue Jun 29, 2018 · 4 comments

Comments

@xljhtq

xljhtq commented Jun 29, 2018

Hi, I want to know how the adv loss is different from the domain loss.
In other words, the adv loss in the paper "Adversarial Multi-task Learning for Text Classification" is not described clearly, so I want to know what the equation is.

xljhtq changed the title from "Adv" to "Adv Loss is not supported by the paper???" on Jun 29, 2018
@FrankWork
Owner

total_loss = task_loss + adv_loss + diff_loss + l2_loss
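
For reference, this is how I read the adversarial term in the "Adversarial Multi-task Learning for Text Classification" paper (my reading, not a quote): the discriminator D tries to predict which task an example came from, while the shared encoder E is trained to fool it, and the overall objective combines the terms above with weighting coefficients (l2_loss is just regularization from the code):

```latex
% Adversarial loss: min over shared-encoder parameters, max over discriminator parameters
L_{Adv} = \min_{\theta_s} \Big( \lambda \max_{\theta_D}
          \sum_{k=1}^{K} \sum_{i=1}^{N_k} d_i^{k} \log \big[ D\big( E(x_i^{k}) \big) \big] \Big)

% Overall objective, matching total_loss = task_loss + adv_loss + diff_loss (+ l2 regularization)
L = L_{Task} + \lambda \, L_{Adv} + \gamma \, L_{Diff}
```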

@xljhtq
Author

xljhtq commented Jul 2, 2018

@FrankWork In your code, "total_loss = task_loss + adv_loss + diff_loss + l2_loss", so minimizing the total_loss will also decrease the adv_loss. But in reality, we should let the adv_loss increase in order to get the shared features.
So what should I do: maximize the adv_loss or minimize it?

@FrankWork
Owner

There is a function flip_gradient that is used to maximize the adv_loss.
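
In case the mechanism is unclear: flip_gradient is a gradient reversal layer. It is the identity in the forward pass, so adv_loss is computed normally, but it negates the gradient flowing back into the shared encoder; the optimizer therefore still minimizes total_loss, while adv_loss is effectively maximized with respect to the shared features. A minimal TF1-style sketch using the usual gradient_override_map pattern (the exact implementation in this repo may differ):

```python
import tensorflow as tf


class FlipGradientBuilder(object):
    """Gradient reversal layer: identity in the forward pass,
    gradient multiplied by -l in the backward pass."""

    def __init__(self):
        self.num_calls = 0

    def __call__(self, x, l=1.0):
        # Register a fresh gradient override per call so that different
        # reversal strengths l can coexist in one graph.
        grad_name = "FlipGradient%d" % self.num_calls
        self.num_calls += 1

        @tf.RegisterGradient(grad_name)
        def _flip_gradients(op, grad):
            return [tf.negative(grad) * l]

        g = tf.get_default_graph()
        with g.gradient_override_map({"Identity": grad_name}):
            y = tf.identity(x)
        return y


flip_gradient = FlipGradientBuilder()

# usage (hypothetical names): reverse the shared feature before the task discriminator
# shared_feature = flip_gradient(shared_feature, l=0.05)
```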

@ammaarahmad1999

Hi, do you know the equivalent of flip_gradient in PyTorch?
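
A common PyTorch counterpart (just a sketch of the usual pattern, not code from this repo) is a custom autograd Function that passes the input through unchanged and negates the gradient on the way back:

```python
import torch
from torch.autograd import Function


class GradReverse(Function):
    """Identity in the forward pass; the gradient is negated and scaled by
    lambd in the backward pass (PyTorch counterpart of flip_gradient)."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient w.r.t. x; lambd itself receives no gradient.
        return grad_output.neg() * ctx.lambd, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


# usage (hypothetical names):
# domain_logits = discriminator(grad_reverse(shared_feature, 0.05))
```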
