Say I have a function that solves a linear program whose parameters depend on the arguments to the function. The function returns the sum of the norm of the optimal primal solution and the norm of the optimal dual vector of the constraints. Can I use DiffOpt to compute the gradient of the return value with respect to the arguments?
I looked through the docs and the examples, but it wasn't clear to me how to do this, or whether it is possible at all. Any pointers would be greatly appreciated.
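For concreteness, here is a minimal sketch of the kind of function I mean, written with plain JuMP (the LP data, variable dimensions, and the `solve_and_norm` name are just illustrative placeholders — the real function builds its LP from the argument `p`):

```julia
using JuMP, HiGHS, LinearAlgebra

function solve_and_norm(p)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x[1:2] >= 0)
    # the argument p enters the constraint data
    @constraint(model, con, x[1] + 2x[2] <= p)
    @objective(model, Min, -x[1] - x[2])
    optimize!(model)
    # norm of the optimal primal solution plus norm of the constraint dual
    return norm(value.(x)) + norm(dual(con))
end
```

The question is whether DiffOpt can give me `d/dp solve_and_norm(p)`, given that the return value involves the duals as well as the primal solution.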