Redundant calculations of objective functions and constraints #725
-
I'm using automatic differentiation to calculate the derivative of my objective function and constraints. As such I effectively have four different functions:
Notice that the functions with automatic differentiation (2 and 4) calculate both the objective function/constraints and the derivatives. But when interfacing with Ipopt there's no way to tell whether I should also calculate the derivatives or just the objective function/constraint values. In my current setup I simply call …
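A hypothetical sketch of such a four-function setup (the toy objective, constraint, and hand-written derivatives stand in for the real AD-generated code; all names here are assumptions, not the actual code from the question):

```python
import numpy as np

# Toy problem: objective f(x) = x0^2 + x1^2, one constraint g(x) = x0 + x1.
# Functions 2 and 4 mimic AD routines that return the value AND the
# derivative in a single pass.

def eval_f(x):                      # 1: objective value only
    return x[0]**2 + x[1]**2

def eval_f_with_grad(x):            # 2: objective value and gradient
    return x[0]**2 + x[1]**2, np.array([2.0 * x[0], 2.0 * x[1]])

def eval_g(x):                      # 3: constraint values only
    return np.array([x[0] + x[1]])

def eval_g_with_jac(x):             # 4: constraint values and Jacobian
    return np.array([x[0] + x[1]]), np.array([[1.0, 1.0]])
```

When Ipopt asks only for function values, the derivative work in functions 2 and 4 is wasted; when it asks for derivatives, the value comes along for free.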
-
A reason that Ipopt does not tell in advance whether it will also need a gradient is that it typically does not know itself. In my understanding, Ipopt tries several points during line search. To decide whether to accept a point as the next iterate, it will need only function values (`eval_f` and `eval_g`). If a point is accepted, then it will also need the first and second derivatives to continue (i.e., to calculate the next search direction).

So whether to always compute both function values and gradient/Jacobian, with the risk that the latter is thrown away, or to essentially recompute the function values when doing the work for gradient/Jacobian only when requested, depends on how much effort these steps are and how many trial points Ipopt needs to go through before accepting a point. Maybe you can implement both and switch to whichever seems better according to the instance and Ipopt's behavior. Maybe you can find some pattern, like Ipopt typically not requesting the gradient/Jacobian for x new points after a point where it needed them (x being an estimate of the number of unsuccessful trials in line search).
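One way to act on this trade-off (a sketch only, assuming a Python layer sits between Ipopt and the AD code; the class name and the `value_and_grad` callback signature are made up for illustration) is to always run the combined AD call, cache the result by point, and hand the cached gradient back if the solver requests it at the same x:

```python
import numpy as np

class ADCache:
    """Compute value and gradient together and cache them by point, so a
    later gradient request at the same x reuses the earlier AD pass.
    Hypothetical sketch, not part of Ipopt's API."""

    def __init__(self, value_and_grad):
        self._vg = value_and_grad   # callback: x -> (value, gradient)
        self._x = None
        self._value = None
        self._grad = None

    def value(self, x):
        # Trial points in line search usually need only the value; this
        # sketch always uses the combined AD call and keeps the gradient
        # as a by-product. A value-only routine could be used here instead.
        self._refresh(x)
        return self._value

    def grad(self, x):
        self._refresh(x)
        return self._grad

    def _refresh(self, x):
        x = np.asarray(x, dtype=float)
        if self._x is None or not np.array_equal(x, self._x):
            self._value, self._grad = self._vg(x)
            self._x = x.copy()
```

The opposite strategy (value-only callbacks plus a separate derivative pass that recomputes the value) just swaps which work may be duplicated; counting how often each callback is hit per accepted point would tell you which variant wins for your instance.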