Dear Ipopt community,

I have recently started using Ipopt through Python: I conda-installed the cyipopt package as well as the Ipopt solver from conda-forge.
I copy-pasted the example code from the cyipopt documentation page and it worked fine. However, when I tried a simple optimization problem of my own, the solver returned a strange solution.
The simple problem that I tried was: minimize x[0] + x[1], subject to x[0] - x[1] <= 0, x[0] >= 0, x[1] >= 0.
Obviously, the solution to this problem should be 0, attained at x = (0, 0): both variables are non-negative, so x[0] + x[1] >= 0, and (0, 0) is feasible.
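As a quick cross-check (just a small sketch on the side, assuming SciPy is available), the same LP can be handed to scipy.optimize.linprog, which should confirm the optimum of 0 at (0, 0):

from scipy.optimize import linprog

# minimize x0 + x1  subject to  x0 - x1 <= 0,  x0 >= 0,  x1 >= 0
res = linprog(c=[1, 1], A_ub=[[1, -1]], b_ub=[0], bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # expected: [0. 0.] and 0.0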
I defined the problem in the same way as suggested on the cyipopt page:
import cyipopt
import numpy as np


class HS073(cyipopt.Problem):

    def objective(self, x):
        """Returns the scalar value of the objective given x."""
        return x[0] + x[1]

    def gradient(self, x):
        """Returns the gradient of the objective with respect to x."""
        return np.array([1, 1])

    def constraints(self, x):
        """Returns the constraints."""
        return np.array([x[0] - x[1]])

    def jacobian(self, x):
        """Returns the Jacobian of the constraints with respect to x."""
        return np.array([1, -1])

    def intermediate(self, alg_mod, iter_count, obj_value, inf_pr, inf_du, mu,
                     d_norm, regularization_size, alpha_du, alpha_pr,
                     ls_trials):
        """Prints information at every Ipopt iteration."""
        iterate = self.get_current_iterate()
        infeas = self.get_current_violations()
        primal = iterate["x"]
        jac = self.jacobian(primal)

        msg = "Objective value at iteration #{:d} is - {:g}"
        print(msg.format(iter_count, obj_value))
        print("Iteration:", iter_count)
        print("Primal iterate:", primal)
        print("Flattened Jacobian:", jac)
        print("Dual infeasibility:", infeas["grad_lag_x"])
Then, I added the bounds and formulated the problem:
lb = [0, 0]
ub = [2e10, 2e10]

cl = [-2e10]
cu = [0]

x0 = [1, 2]

nlp = HS073(
    n=len(x0),
    m=len(cl),
    lb=lb,
    ub=ub,
    cl=cl,
    cu=cu,
)
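For what it is worth, the callbacks can also be checked by hand at the starting point; from the definitions above they should return the values noted in the comments:

print(nlp.objective(x0))    # 3 (= 1 + 2)
print(nlp.gradient(x0))     # [1 1]
print(nlp.constraints(x0))  # [-1] (= [1 - 2])
print(nlp.jacobian(x0))     # [ 1 -1]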
Finally, I solved the problem:
nlp.add_option('mu_strategy', 'adaptive')
nlp.add_option('tol', 1e-10)
x, info = nlp.solve(x0)
This is the output. Already after one iteration, the iterate does not make sense:
Objective value at iteration #0 is - 3
Iteration: 0
Primal iterate: [1. 2.]
Flattened Jacobian: [ 1 -1]
Dual infeasibility: [1. 1.]
Objective value at iteration #1 is - 9232.62
Iteration: 1
Primal iterate: [2308.15382761 6924.4615058 ]
Flattened Jacobian: [ 1 -1]
Dual infeasibility: [-3807.1261665 -5330.17863752]
Objective value at iteration #2 is - 32831.6
Iteration: 2
Primal iterate: [ 9283.95016194 23547.67587445]
Flattened Jacobian: [ 1 -1]
Dual infeasibility: [-10191.60234577 -13407.40835721]
Objective value at iteration #3 is - 34372.2
The iterations continued until they reached 3000 steps, and the final answer is this:
Objective value at iteration #3000 is - 28980.6
Iteration: 3000
Primal iterate: [ 6848.26950247 22132.29693556]
Flattened Jacobian: [ 1 -1]
Dual infeasibility: [0.99999354 0.99999283]
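In case it helps with the diagnosis, this is how I would inspect the final status after solve returns (the info keys here are taken from my reading of the cyipopt documentation, so treat it as a sketch):

print(info["status"], info["status_msg"])
print("x =", x, "objective =", info["obj_val"])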
What is going on? I don't see any mistake in my code. Could it be that this simple problem is not suitable for Ipopt? Or is there a bug in cyipopt?
I hope someone can help me clear up this confusion. Any help is appreciated!