[WIP] Update plasmo interface #237

Draft
wants to merge 3 commits into master

Conversation

@jalving (Contributor) commented Nov 10, 2022

I have been working on updating MadNLPGraph to use the latest MadNLP v0.5 and Plasmo.jl v0.5. I was able to get a standard problem to solve with MadNLPGraph, but I can't figure out how to update the Schur and Schwarz linear solvers to make them work.

The following standard solve works.

using Plasmo
using MadNLP
using MadNLPGraph

graph = OptiGraph()
@optinode(graph,nodes[1:4])
for node in nodes
    @variable(node,x>=0)
    @variable(node,y>=0)
    @constraint(node,x + y >= 3)
    @NLconstraint(node,x^3 >= 1)
end
@objective(graph,Min,sum(n[:x] for n in nodes))
@linkconstraint(graph,sum(node[:y] for node in nodes) == 10)

# works
MadNLPGraph.optimize!(graph)

But this does not work:

# doesn't work
MadNLPGraph.optimize!(graph; linear_solver=MadNLPSchur, schur_custom_partition=true)
ERROR: MethodError: Cannot `convert` an object of type 
  Module to an object of type 
  Type
Closest candidates are:
  convert(::Type{Type}, ::Type) at essentials.jl:206
  convert(::Type{T}, ::T) where T at essentials.jl:205
Stacktrace:
 [1] MadNLPOptions(rethrow_error::Bool, disable_garbage_collector::Bool, blas_num_threads::Int64, linear_solver::Module, iterator::Type, output_file::String, print_level::MadNLP.LogLevels, file_print_level::MadNLP.LogLevels, tol::Float64, acceptable_tol::Float64, acceptable_iter::Int64, diverging_iterates_tol::Float64, max_iter::Int64, max_wall_time::Float64, s_max::Float64, kappa_d::Float64, fixed_variable_treatment::MadNLP.FixedVariableTreatments, jacobian_constant::Bool, hessian_constant::Bool, kkt_system::MadNLP.KKTLinearSystem, dual_initialized::Bool, inertia_correction_method::MadNLP.InertiaCorrectionMethod, constr_mult_init_max::Float64, bound_push::Float64, bound_fac::Float64, nlp_scaling::Bool, nlp_scaling_max_gradient::Float64, inertia_free_tol::Float64, min_hessian_perturbation::Float64, first_hessian_perturbation::Float64, max_hessian_perturbation::Float64, perturb_inc_fact_first::Float64, perturb_inc_fact::Float64, perturb_dec_fact::Float64, jacobian_regularization_exponent::Float64, jacobian_regularization_value::Float64, soft_resto_pderror_reduction_factor::Float64, required_infeasibility_reduction::Float64, obj_max_inc::Float64, kappha_soc::Float64, max_soc::Int64, alpha_min_frac::Float64, s_theta::Float64, s_phi::Float64, eta_phi::Float64, kappa_soc::Float64, gamma_theta::Float64, gamma_phi::Float64, delta::Int64, kappa_sigma::Float64, barrier_tol_factor::Float64, rho::Float64, mu_init::Float64, mu_min::Float64, mu_superlinear_decrease_power::Float64, tau_min::Float64, mu_linear_decrease_factor::Float64)
   @ MadNLP ~/.julia/packages/MadNLP/dL4EA/src/options.jl:21
 [2] MadNLPOptions(; rethrow_error::Bool, disable_garbage_collector::Bool, blas_num_threads::Int64, linear_solver::Module, iterator::Type, output_file::String, print_level::MadNLP.LogLevels, file_print_level::MadNLP.LogLevels, tol::Float64, acceptable_tol::Float64, acceptable_iter::Int64, diverging_iterates_tol::Float64, max_iter::Int64, max_wall_time::Float64, s_max::Float64, kappa_d::Float64, fixed_variable_treatment::MadNLP.FixedVariableTreatments, jacobian_constant::Bool, hessian_constant::Bool, kkt_system::MadNLP.KKTLinearSystem, dual_initialized::Bool, inertia_correction_method::MadNLP.InertiaCorrectionMethod, constr_mult_init_max::Float64, bound_push::Float64, bound_fac::Float64, nlp_scaling::Bool, nlp_scaling_max_gradient::Float64, inertia_free_tol::Float64, min_hessian_perturbation::Float64, first_hessian_perturbation::Float64, max_hessian_perturbation::Float64, perturb_inc_fact_first::Float64, perturb_inc_fact::Float64, perturb_dec_fact::Float64, jacobian_regularization_exponent::Float64, jacobian_regularization_value::Float64, soft_resto_pderror_reduction_factor::Float64, required_infeasibility_reduction::Float64, obj_max_inc::Float64, kappha_soc::Float64, max_soc::Int64, alpha_min_frac::Float64, s_theta::Float64, s_phi::Float64, eta_phi::Float64, kappa_soc::Float64, gamma_theta::Float64, gamma_phi::Float64, delta::Int64, kappa_sigma::Float64, barrier_tol_factor::Float64, rho::Float64, mu_init::Float64, mu_min::Float64, mu_superlinear_decrease_power::Float64, tau_min::Float64, mu_linear_decrease_factor::Float64)
   @ MadNLP ./util.jl:450
 [3] load_options(; linear_solver::Module, options::Base.Iterators.Pairs{Symbol, Any, NTuple{5, Symbol}, NamedTuple{(:schur_custom_partition, :schur_num_parts, :schur_part, :hessian_constant, :jacobian_constant), Tuple{Bool, Int64, Vector{Int64}, Bool, Bool}}})
   @ MadNLP ~/.julia/packages/MadNLP/dL4EA/src/options.jl:112
 [4] MadNLPSolver(nlp::MadNLPGraph.GraphModel{Float64}; kwargs::Base.Iterators.Pairs{Symbol, Any, NTuple{6, Symbol}, NamedTuple{(:schur_custom_partition, :schur_num_parts, :linear_solver, :schur_part, :hessian_constant, :jacobian_constant), Tuple{Bool, Int64, Module, Vector{Int64}, Bool, Bool}}})
   @ MadNLP ~/.julia/packages/MadNLP/dL4EA/src/IPM/IPM.jl:99
 [5] optimize!(graph::OptiGraph; kwargs::Base.Iterators.Pairs{Symbol, Any, Tuple{Symbol, Symbol}, NamedTuple{(:linear_solver, :schur_custom_partition), Tuple{Module, Bool}}})
   @ MadNLPGraph ~/.julia/dev/MadNLP/lib/MadNLPGraph/src/plasmo_interface.jl:541
 [6] top-level scope
   @ REPL[269]:1

I think MadNLP no longer supports providing linear solvers as modules.
However, passing the solver type directly also does not work:

# doesn't work
MadNLPGraph.optimize!(graph; linear_solver=MadNLPGraph.MadNLPSchur.Solver,  schur_custom_partition=true)
ERROR: MethodError: no method matching input_type(::Type{MadNLPGraph.MadNLPSchur.Solver})
Closest candidates are:
  input_type(::Type{LapackCPUSolver}) at /home/jordan/.julia/packages/MadNLP/dL4EA/src/LinearSolvers/lapack.jl:208
  input_type(::Type{UmfpackSolver}) at /home/jordan/.julia/packages/MadNLP/dL4EA/src/LinearSolvers/umfpack.jl:98
Stacktrace:
 [1] check_option_sanity(options::MadNLPOptions)
   @ MadNLP ~/.julia/packages/MadNLP/dL4EA/src/options.jl:97
 [2] load_options(; linear_solver::Type, options::Base.Iterators.Pairs{Symbol, Bool, Tuple{Symbol, Symbol}, NamedTuple{(:hessian_constant, :jacobian_constant), Tuple{Bool, Bool}}})
   @ MadNLP ~/.julia/packages/MadNLP/dL4EA/src/options.jl:114
 [3] MadNLPSolver(nlp::MadNLPGraph.GraphModel{Float64}; kwargs::Base.Iterators.Pairs{Symbol, Any, Tuple{Symbol, Symbol, Symbol}, NamedTuple{(:linear_solver, :hessian_constant, :jacobian_constant), Tuple{UnionAll, Bool, Bool}}})
   @ MadNLP ~/.julia/packages/MadNLP/dL4EA/src/IPM/IPM.jl:99
 [4] optimize!(graph::OptiGraph; kwargs::Base.Iterators.Pairs{Symbol, UnionAll, Tuple{Symbol}, NamedTuple{(:linear_solver,), Tuple{UnionAll}}})
   @ MadNLPGraph ~/.julia/dev/MadNLP/lib/MadNLPGraph/src/plasmo_interface.jl:541
 [5] top-level scope
   @ REPL[250]:1

I tried removing the option_dict from the Schur solver, since I think MadNLP now uses only kwargs..., but I still get the same error.
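
For reference, the built-in MadNLP solvers do appear to be specified as types now. A minimal sketch of that pattern, going through the MOI/JuMP wrapper rather than MadNLPGraph and using UmfpackSolver (one of the input_type candidates listed in the error above):

using JuMP
using MadNLP

# sketch: the linear solver is passed as a concrete solver type, not a module
model = Model(() -> MadNLP.Optimizer(linear_solver=MadNLP.UmfpackSolver))
@variable(model, x >= 1)
@NLconstraint(model, x^3 >= 8)
@objective(model, Min, x)
optimize!(model)

So the question is really how a Schur/Schwarz solver type should plug into that same mechanism.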

@frapac or @sshin23 do you have any ideas where I should look? I assume the way we specify the linear solver has changed?

@codecov-commenter commented Nov 10, 2022

Codecov Report

Merging #237 (8f6a054) into master (15c227e) will decrease coverage by 0.90%.
The diff coverage is 0.00%.

@@            Coverage Diff             @@
##           master     #237      +/-   ##
==========================================
- Coverage   74.03%   73.13%   -0.91%     
==========================================
  Files          38       38              
  Lines        3871     3919      +48     
==========================================
  Hits         2866     2866              
- Misses       1005     1053      +48     
Impacted Files Coverage Δ
lib/MadNLPGraph/src/graphtools.jl 0.00% <0.00%> (ø)
lib/MadNLPGraph/src/plasmo_interface.jl 0.00% <0.00%> (ø)
lib/MadNLPGraph/src/schur.jl 0.00% <0.00%> (ø)
lib/MadNLPGraph/src/schwarz.jl 0.00% <0.00%> (ø)


@etatara commented Mar 10, 2023

I've found that in my project using [email protected], I don't even need to use MadNLPGraph. Simply using:

JuMP.set_optimizer(graph,MadNLP.Optimizer)
JuMP.optimize!(graph)

works as expected. I found this out because the MadNLPGraph compat restricts Plasmo.jl to "~0.3, ~0.4" so I can't use it with Plasmo.jl 0.5.
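
For instance, a self-contained sketch of that approach applied to the example graph from the first comment (assuming Plasmo.jl v0.5 forwards these JuMP calls to the graph):

using Plasmo
using JuMP
using MadNLP

graph = OptiGraph()
@optinode(graph, nodes[1:4])
for node in nodes
    @variable(node, x >= 0)
    @variable(node, y >= 0)
    @constraint(node, x + y >= 3)
    @NLconstraint(node, x^3 >= 1)
end
@objective(graph, Min, sum(n[:x] for n in nodes))
@linkconstraint(graph, sum(node[:y] for node in nodes) == 10)

# hand the whole graph to MadNLP directly; no MadNLPGraph involved
JuMP.set_optimizer(graph, MadNLP.Optimizer)
JuMP.optimize!(graph)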

@jalving (Contributor, Author) commented Mar 16, 2023

That is correct; Plasmo.jl will interface with MadNLP the same way JuMP would when called as you show.

This PR was intended to get the Schur/Schwarz linear solvers working with the newest Plasmo.jl and MadNLP versions. There is ongoing work towards a block-based solver interface (not yet public) that Plasmo.jl could communicate through. More than likely, this PR will become obsolete once that interface is functional.
