Additional 'acceptability' tests? #762
Unanswered
gelatinouscube42 asked this question in Q&A
Replies: 1 comment 1 reply
-
I am not aware of any plans to extend the acceptable termination criteria. There are many different ideas about when the solver should be stopped, and covering them all would make the code very complex. Instead, the intermediate_callback should be used to implement your own stopping criteria. In your case, would setting acceptable_tol to a high value and acceptable_constr_viol_tol to a low value not be sufficient to make Ipopt stop once it is in a feasible area? Or would the objective value still be too far from the "known limit" in that case?
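For illustration, here is a minimal sketch of such a user-defined stopping test in a C++ TNLP. The tolerances and the lower bound of 0 are placeholders taken from the question below, not Ipopt options; returning false from intermediate_callback asks Ipopt to stop, which is reported as User_Requested_Stop.

```cpp
// Sketch only: a custom stopping test inside TNLP::intermediate_callback.
#include "IpTNLP.hpp"

using namespace Ipopt;

class MyNLP : public TNLP
{
public:
   // ... the usual TNLP methods (get_nlp_info, eval_f, eval_g, ...) go here ...

   bool intermediate_callback(
      AlgorithmMode              mode,
      Index                      iter,
      Number                     obj_value,            // unscaled objective at the current iterate
      Number                     inf_pr,               // primal infeasibility (scaling depends on inf_pr_output)
      Number                     inf_du,
      Number                     mu,
      Number                     d_norm,
      Number                     regularization_size,
      Number                     alpha_du,
      Number                     alpha_pr,
      Index                      ls_trials,
      const IpoptData*           ip_data,
      IpoptCalculatedQuantities* ip_cq
   ) override
   {
      // Placeholder tolerances, chosen by the user (these are not Ipopt options).
      const Number feas_tol    = 1e-8;   // "primal feasibility being low"
      const Number obj_gap_tol = 1e-4;   // objective close to the known lower bound 0

      if( mode == RegularMode && inf_pr < feas_tol && obj_value < obj_gap_tol )
      {
         return false;   // ask Ipopt to stop at this iterate
      }
      return true;        // otherwise keep iterating
   }
};
```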
-
Hi all,
tldr: Are there plans for, and/or would there be interest in, implementing an "acceptable convergence" condition for solutions that are primal feasible and unlikely to improve?
Details:
I have an applied problem,* for which I know analytically that the objective function is bounded below by 0. The solver routinely reaches solutions that are "practically" close enough, but I cannot get the dual infeasibility to shrink to acceptable levels. Most likely my problem formulation is sufficiently nonconvex/ill-conditioned in the region of the optimum that no amount of option tinkering will get the dual variables to converge any better (open to suggestions, though!).
This leads me to the following ask: are there plans for adding additional "acceptable convergence" criteria? For my particular problem, the high dual infeasibility de facto forces the 'acceptable_tol' parameter to be high. While I could perhaps set that high and then lower the 'acceptable_obj_change_tol' parameter, in practice this does not make the routine return any faster: the objective value tends to jitter a bit between iterations.**
The "practical convergence" conditions I care more about are: (i) the primal infeasibility being low and (ii) the (unscaled) objective value being close to a "known limit." One could imagine a condition where the algorithm returns once "n_iters_since_good" iterations have passed since it last found such a solution, and then returns the best solution found so far (with low primal constraint violations). Obvious variations include the number of iterations since the "best primal feasible solution" was last improved.
Would there be any interest in implementing such a feature? If so, any guidance on where to look/how to get started? I would personally be interested in seeing this done, so would be willing to pitch in if given sufficient oversight.
** There may be a way to fiddle with the line-search options to prevent this; I haven't spent much time thinking about that particular dimension yet. My bigger point, I think, still holds.