I've been working on a project with DiffOpt where I need to query the sensitivity of variables w.r.t. parameters. I managed to make an MWE of the issue with just JuMP and DiffOpt, but I'm still not sure what the underlying problem is.
Consider the problem below, where for `0 < p < 1` there is exactly one optimal active set: the `x ≥ p` constraints are active and the bounds on `x` are inactive. So I expect the `W` extracted below to be exactly the identity matrix (scaled by the perturbation).
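(To spell out the expectation, my own one-line derivation, not part of the original issue: with the `x .≥ p` rows active and the variable bounds slack, the optimum is pinned at $x^\star(p) = p$, so

$$W = \frac{\partial x^\star}{\partial p} = I_P,$$

the $P \times P$ identity, and each unit perturbation $\delta = e_i$ should return the column $e_i$.)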
Note I am using the new POI integration from #262.
For one parameter (`P = 1`), everything works as expected:
```julia
using JuMP, DiffOpt, Ipopt

P = 1
m = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer, with_parametric_opt_interface=true))
@variable(m, 0.0 ≤ x[1:P] ≤ 1.0)
@variable(m, p[1:P] ∈ Parameter.(0.5))
@constraint(m, x .≥ p)
@objective(m, Min, sum(x))
optimize!(m)
@assert is_solved_and_feasible(m)

W = zeros(length(x), length(p))
for i in 1:length(p)
    DiffOpt.empty_input_sensitivities!(m)
    δ = zeros(length(p))
    δ[i] = 1.0  # to compute sensitivity wrt p[i]
    MOI.set.(m, DiffOpt.ForwardConstraintSet(), ParameterRef.(p), Parameter.(δ))
    DiffOpt.forward_differentiate!(m)
    for j in 1:length(x)
        W[j, i] = MOI.get(m, DiffOpt.ForwardVariablePrimal(), x[j])
    end
end
W

# 1×1 Matrix{Float64}:
#  1.0
```
But setting `P = 2`, we get:

```julia
# 2×2 Matrix{Float64}:
#   1.0  1.0
#  -0.0  1.0
```
where the first column is correct, but there is an extra 1.0 in the first row of the second column. With larger `P` the same pattern continues, e.g. for `P = 5`.
Interestingly, the same script using the new NLP backend from #260 does match what I expect! To force the use of the NLP backend, I switch to that branch and pass `with_parametric_opt_interface=false`:

```julia
m = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer, with_parametric_opt_interface=false))
```

and the result is as expected. I also tried switching to Clarabel and adding a conic constraint to force the `ConicProgram` backend; it gives the same output as `QuadraticProgram`.
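The Clarabel variant itself was collapsed in the issue; a minimal sketch of what it might look like (my reconstruction, not the author's original code; the epigraph variable `t` exists only to introduce a second-order cone constraint):

```julia
using JuMP, DiffOpt, Clarabel

P = 2
m = Model(() -> DiffOpt.diff_optimizer(Clarabel.Optimizer, with_parametric_opt_interface=true))
@variable(m, 0.0 ≤ x[1:P] ≤ 1.0)
@variable(m, p[1:P] ∈ Parameter.(0.5))
@variable(m, t)
@constraint(m, x .≥ p)
# Cone constraint t ≥ ‖x‖₂, added only to route the model through the
# ConicProgram backend; it does not change the optimal x, which stays at p.
@constraint(m, [t; x] in SecondOrderCone())
@objective(m, Min, sum(x) + t)
optimize!(m)
```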
Here is a second, smaller MWE isolating what seems to be the underlying issue, repeated sensitivity queries on the same model:

```julia
using JuMP, DiffOpt, Ipopt

m = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer, with_parametric_opt_interface=true))
@variable(m, x)
@variable(m, p ∈ Parameter(1.0))
@variable(m, q ∈ Parameter(2.0))
@constraint(m, x ≥ p)
@constraint(m, x ≥ q)
@objective(m, Min, x)
optimize!(m)
@assert is_solved_and_feasible(m)

function get_sensitivity(m, xᵢ, pᵢ)
    DiffOpt.empty_input_sensitivities!(m)
    MOI.set(m, DiffOpt.ForwardConstraintSet(), ParameterRef(pᵢ), Parameter(1.0))
    DiffOpt.forward_differentiate!(m)
    return MOI.get(m, DiffOpt.ForwardVariablePrimal(), xᵢ)
end

sp1 = get_sensitivity(m, x, p)
sp2 = get_sensitivity(m, x, q)
sp3 = get_sensitivity(m, x, p)
@assert sp1 ≈ sp3  # fails
@assert sp2 ≠ sp3  # fails
```
Calling `empty!(unsafe_backend(m).optimizer.diff.model.input_cache)` in between calls to `get_sensitivity` seems to fix it... It looks like there are actually two `input_cache`s: one under `unsafe_backend(m).optimizer`, which is cleared by `empty_input_sensitivities!`, and the one I'm clearing manually here.
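For reference, a minimal sketch of that workaround folded into the helper above (`get_sensitivity_patched` is a hypothetical name, and the `unsafe_backend(m).optimizer.diff.model.input_cache` path reaches into DiffOpt internals that may change between versions):

```julia
function get_sensitivity_patched(m, xᵢ, pᵢ)
    DiffOpt.empty_input_sensitivities!(m)
    # Workaround: also clear the inner input_cache that
    # empty_input_sensitivities! apparently misses (internal API, may change).
    empty!(unsafe_backend(m).optimizer.diff.model.input_cache)
    MOI.set(m, DiffOpt.ForwardConstraintSet(), ParameterRef(pᵢ), Parameter(1.0))
    DiffOpt.forward_differentiate!(m)
    return MOI.get(m, DiffOpt.ForwardVariablePrimal(), xᵢ)
end
```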
I think the discrepancy between the behavior of `NonLinearProgram` and `ConicProgram`/`QuadraticProgram` comes from the fact that NLP does not use `_fill`. Checking the `db` that `ConicProgram` builds, it seems to be wrong whenever a sensitivity is queried wrt a parameter that has already been queried before. That would also explain the extra 1.0 above: if the perturbation from an earlier query is never cleared, the second query effectively differentiates wrt δ = e₁ + e₂, which for `P = 2` gives exactly `[1.0, 1.0]` in the second column.
cc @andrewrosemberg