Differentiating the dual variable of a linear program #261
Comments
Thoughts on this issue? It would be nice to be able to differentiate both the primal and the dual solution with respect to the parameters of the problem. I'll be grateful for any help.
You can use DiffOpt to differentiate the primal, then compute the dual with Dualization, and then use DiffOpt on that dual model.
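For concreteness, here is a rough sketch of that two-model workflow. It assumes the DiffOpt forward-mode attributes (`ForwardConstraintFunction`, `ForwardVariablePrimal`, `forward_differentiate!`) and Dualization's `dualize`; exact signatures and sign conventions may differ between versions, so treat it as a starting point rather than tested code:

```julia
using JuMP, DiffOpt, Dualization, HiGHS
import MathOptInterface as MOI

# Primal LP with a scalar parameter p:  min 2x  s.t.  x >= p.
p = 3.0
primal = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
set_silent(primal)
@variable(primal, x)
@constraint(primal, con, 1.0 * x >= p)
@objective(primal, Min, 2.0 * x)
optimize!(primal)

# Sensitivity of the primal solution to p: a unit increase in the right-hand
# side corresponds to a constant perturbation of -1.0 on the function side.
MOI.set(primal, DiffOpt.ForwardConstraintFunction(), con, 0.0 * x - 1.0)
DiffOpt.forward_differentiate!(primal)
dx_dp = MOI.get(primal, DiffOpt.ForwardVariablePrimal(), x)

# For dual sensitivities, dualize the problem, attach a differentiable
# optimizer, solve, and repeat the DiffOpt step on the dual model: its primal
# variables are the original duals.  Note that p now appears in the dual
# objective, so the perturbation goes through ForwardObjectiveFunction there.
dual_model = dualize(primal)
set_optimizer(dual_model, () -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
optimize!(dual_model)
```

The second `optimize!` is exactly the extra solve being discussed below.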
Thanks. That was what I was going to do, but it needlessly doubles the time required. The duals are produced as a side effect of the first solve anyway, and it would be nice to be able to calculate the gradient with the information already computed. (This is possible, as shown in https://arxiv.org/abs/1703.00443.)
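For reference, the paper's trick is to differentiate the KKT conditions of the problem that was already solved, so the dual sensitivities come out of the same linear system as the primal ones. Sketching it in the paper's notation (my paraphrase, not DiffOpt code) for a QP min ½ xᵀQx + qᵀx subject to Ax = b, Gx ≤ h (the LP case is Q = 0), differentiating stationarity, complementarity, and primal feasibility at the solution (x*, λ*, ν*) gives:

```math
\begin{bmatrix}
Q & G^\top & A^\top \\
\operatorname{diag}(\lambda^\star)\, G & \operatorname{diag}(G x^\star - h) & 0 \\
A & 0 & 0
\end{bmatrix}
\begin{bmatrix} \mathrm{d}x \\ \mathrm{d}\lambda \\ \mathrm{d}\nu \end{bmatrix}
= -
\begin{bmatrix}
\mathrm{d}Q\, x^\star + \mathrm{d}q + \mathrm{d}G^\top \lambda^\star + \mathrm{d}A^\top \nu^\star \\
\operatorname{diag}(\lambda^\star)\,(\mathrm{d}G\, x^\star - \mathrm{d}h) \\
\mathrm{d}A\, x^\star - \mathrm{d}b
\end{bmatrix}
```

One factorization of the left-hand matrix yields both dx and (dλ, dν), which is why a second solve of the dual problem should not be necessary.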
That is not currently available, but it is possible to implement that in DiffOpt. PRs are welcome.
This is also a duplicate of #94.
Say I have a function that solves a linear program whose parameters are based on the arguments to the function. The function returns the sum of the norm of the optimal primal solution and the norm of the optimal dual vector of the constraints. Can I use DiffOpt to find the gradient of the return value with respect to the arguments?
I looked through the docs and the examples but it wasn't clear to me how to do this or if it is possible. Any pointers would be greatly appreciated.
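To make the question concrete, here is roughly the function I have in mind (the problem data, `solve_and_score`, and the single parameter vector `p` are just for illustration; the reverse-mode calls follow my reading of the DiffOpt docs and may need adjusting):

```julia
using JuMP, DiffOpt, HiGHS, LinearAlgebra
import MathOptInterface as MOI

# f(p) = ||x*(p)|| + ||λ*(p)|| for the LP  min cᵀx  s.t.  A x >= p,  x >= 0.
function solve_and_score(p::Vector{Float64})
    c = [1.0, 2.0]
    A = [1.0 1.0; 1.0 -1.0]
    model = Model(() -> DiffOpt.diff_optimizer(HiGHS.Optimizer))
    set_silent(model)
    @variable(model, x[1:2] >= 0)
    @constraint(model, con[i = 1:2], sum(A[i, j] * x[j] for j in 1:2) >= p[i])
    @objective(model, Min, c' * x)
    optimize!(model)

    xstar = value.(x)
    λstar = dual.(con)   # the duals are already here after the first solve
    f = norm(xstar) + norm(λstar)

    # The norm(x*) part: seed reverse mode with ∂f/∂x* = x*/||x*||; the
    # sensitivities with respect to each constraint's data (including p[i])
    # are then encoded in DiffOpt.ReverseConstraintFunction() for con[i].
    for i in 1:2
        MOI.set(model, DiffOpt.ReverseVariablePrimal(), x[i], xstar[i] / norm(xstar))
    end
    DiffOpt.reverse_differentiate!(model)
    # e.g. MOI.get(model, DiffOpt.ReverseConstraintFunction(), con[1])

    # The norm(λ*) part is what I cannot differentiate directly today; the
    # dualization workaround sketched earlier in the thread would require a
    # second differentiable solve of the dual model.
    return f
end

solve_and_score([1.0, 0.5])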