Adv loss is not consistent with the paper? #3
Comments
`total_loss = task_loss + adv_loss + diff_loss + l2_loss`
@FrankWork In your code, `total_loss = task_loss + adv_loss + diff_loss + l2_loss`. Minimizing total_loss will also decrease adv_loss. But in fact adv_loss should be increased (with respect to the shared encoder) in order to learn task-invariant shared features.
There is a function, flip_gradient, that handles this: it leaves the forward pass unchanged but reverses the gradient flowing back into the shared encoder, so minimizing total_loss effectively maximizes adv_loss for the shared features.
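For readers landing on this thread: a gradient-reversal layer in TensorFlow is commonly written with the stop_gradient identity trick below. This is a minimal sketch, not necessarily how this repo implements flip_gradient; the function name and the scale argument `l` just mirror the usage discussed above.

```python
import tensorflow as tf

def flip_gradient(x, l=1.0):
    # Forward pass:  (1 + l) * x - l * x == x                 (identity)
    # Backward pass: stop_gradient cuts the first term's gradient,
    # so the gradient w.r.t. x is -l                          (reversed, scaled)
    return tf.stop_gradient((1.0 + l) * x) - l * x
```

With this, `adv_loss` can simply be added to `total_loss`: the discriminator still minimizes its cross-entropy, while the reversed gradient pushes the shared encoder to maximize it.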
Hi, do you know the equivalent of flip_gradient in PyTorch?
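In PyTorch the same behavior is usually implemented with a custom `torch.autograd.Function`. A minimal sketch (the class and function names here are illustrative):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales the gradient by -lambd in backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # One gradient per forward input: x gets -lambd * dy, lambd gets None.
        return -ctx.lambd * grad_output, None

def flip_gradient(x, lambd=1.0):
    return GradReverse.apply(x, lambd)
```

Usage: pass the shared features through `flip_gradient` before the discriminator, e.g. `adv_logits = discriminator(flip_gradient(shared_repr))`, then add the resulting cross-entropy to the total loss as in the TensorFlow code.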
Hi, I want to know how the adv loss differs from the domain loss.
In other words, the adv loss in the paper "Adversarial Multi-task Learning for Text Classification" is not described clearly, so I want to know what the equation is.
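For what it's worth, my reading of the paper (Liu et al., ACL 2017) is that the adversarial loss has the same cross-entropy form as a domain/task-discriminator loss; what makes it adversarial is the min-max over the shared encoder and the discriminator. Paraphrasing the paper's formulation from memory (please verify against the paper itself):

```latex
% Adversarial loss, paraphrased from Liu et al. (ACL 2017).
% E: shared encoder, D: task discriminator, \theta_s / \theta_D their
% parameters, d_i^k: one-hot task label of sentence x_i^k,
% K: number of tasks, N_k: number of sentences for task k.
L_{\mathrm{Adv}}
  = \min_{\theta_s} \left( \lambda \max_{\theta_D}
    \sum_{k=1}^{K} \sum_{i=1}^{N_k}
      d_i^k \, \log\!\bigl[ D\bigl( E(x_i^k) \bigr) \bigr] \right)
```

So the "domain loss" computed in the code is the inner cross-entropy; routing the shared features through flip_gradient before the discriminator is what turns the min-max into a single minimization, which is why the same loss term serves both roles.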