
Restoration failed when using MadNLP for ODE parameter estimation #264

Open
zornsllama opened this issue Jun 10, 2023 · 2 comments

@zornsllama

I'm not sure if this is the best place to ask this question, so please let me know if there's a more appropriate forum!

I have been performing ODE parameter estimation (fitting to experimentally observed data) using NLPModelsIpopt, and recently tested MadNLP on the same problems. For moderately sized test problems (90k variables, 800k Hessian nonzeros) with noiseless synthetic data, MadNLP worked well and reduced compute time relative to Ipopt (with the same linear solver, Ma97), so I was hoping to move my workflow to MadNLP entirely. However, on larger problems (180k variables, 1.6m Hessian nonzeros), or on problems with noisy data (both synthetic data with artificial noise and real experimental data), the solver either hits the iteration limit or appears to be converging well for several hundred steps before suddenly entering restoration and then failing. In contrast, these same problems (and much larger ones) solve to optimality in Ipopt without issue, using the same underlying NLPModel.

I'm curious what's going on, and I am also wondering if there are optimizer options I could modify that might help the issue. Would anyone here be able to point me in a direction to debug this problem?

@sshin23
Member

sshin23 commented Jun 12, 2023

Hey, @zornsllama, thanks for reporting this.

This type of convergence issue is difficult to debug, but if you could provide a simple example that reproduces it, that would help us improve the convergence behavior of MadNLP.

@zornsllama
Author

zornsllama commented Jun 15, 2023

Hi @sshin23. I have attached a zip file containing an example problem -- the script examples/hh_sde_example.jl generates an NLPProblem called vap, which can then be passed to either NLPModelsIpopt or MadNLP. Example logs from both are also in the examples folder; you can see that Ipopt solves rather quickly, while MadNLP goes into restoration and unfortunately fails. The example problem is parameter estimation for a 4D Hodgkin-Huxley model, discretized via Simpson-Hermite transcription. It is similar to the approach described in your paper here, but incorporates the synchronization-based control given in this paper to regularize convergence. The code in src computes the necessary derivatives using SymPy (this is probably not ideal). Please let me know if you have any questions -- thanks!

example.zip
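For reference, a minimal sketch of the comparison described above, assuming the attached script defines the model `vap` at top level (the file path and variable name come from the comment; the `ipopt`/`madnlp` entry points are the standard ones exported by NLPModelsIpopt.jl and MadNLP.jl, not something specific to the attached code):

```julia
using NLPModelsIpopt, MadNLP

# Build the NLPModel `vap` from the attached example (see example.zip).
include("examples/hh_sde_example.jl")

# Ipopt solves the problem to optimality on the reported instances.
stats_ipopt = ipopt(vap)

# MadNLP reportedly enters restoration after several hundred
# iterations on the larger/noisy instances and then fails.
stats_madnlp = madnlp(vap)

# Both calls return execution stats whose `status` field can be compared.
println(stats_ipopt.status)
println(stats_madnlp.status)
```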
