Trying out the box cone API #63
Performance of quadratic programming with the box cone versus positive cone

Here is a comparison of performance on my machine for:
Code in the corresponding PR qpsolvers/qpsolvers#67
On the random dense benchmark the box API does not seem to scale as well though. From these observations only, it seems using the box cone performs better than stacking in the positive cone on sparse problems, but worse on dense ones.
Hi @stephane-caron, thanks for the feedback, this is very useful! A couple of comments:
I'm a little surprised that the box cone is slower. Is it easy for me to run these examples? It could be that the cone projection (which is more expensive for the box cone because there's a 1d Newton method happening) is dominating for small random examples. Overall the box cone should be slightly faster because the data does not need to be replicated as it does when just using the positive cone (no need to stack the bound constraints twice).
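To make the data-replication point concrete, here is a minimal sketch of the two encodings of lb ≤ x ≤ ub (not code from the PR; the matrix names are illustrative):

```python
import numpy as np
import scipy.sparse as sparse

n = 3
lb, ub = -np.ones(n), np.ones(n)

# Positive cone: both bound directions are stacked, duplicating the identity.
# G x <= h with G = [I; -I], h = [ub; -lb] adds 2n rows.
G = sparse.vstack([sparse.eye(n), -sparse.eye(n)], format="csc")
h = np.hstack([ub, -lb])

# Box cone: a single zero row pins the scale variable t to 1, then -I maps
# the slack to x, so lb <= x <= ub costs only n + 1 rows with no duplication.
A_box = sparse.vstack([sparse.csc_matrix((1, n)), -sparse.eye(n)], format="csc")
b_box = np.hstack([1.0, np.zeros(n)])
```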
Thanks for your feedback! I've removed the extra
Sure, you should be able to just clone the branch and test locally:

```console
git clone https://github.com/stephane-caron/qpsolvers.git -b feature/scs_box_cone
cd qpsolvers/examples
ln -s ../qpsolvers ./  # testing locally
ipython -i benchmark_dense_problem.py
```

All benchmark examples, except the model predictive control one, are set to compare "scs" and "scs_box" in this branch, so you can run any of them.
Thanks @stephane-caron, I was able to run and reproduce your results. I think the issue here is that these random problems are quite simple and nicely conditioned, and that they are dense.
Okay. Let me know if you know other standard/representative problems (or, even better, a family of problems parameterized by e.g. size) that we could add to the QP benchmark.

@bodono Since you see no problem here, I believe we can move forward and make the box cone API the default way to use SCS in qpsolvers. Chime in if you think otherwise.
The Maros-Meszaros test set of QPs is what most people use and is a pretty good test set. It formulates QPs as:
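(The formulation in the original comment was lost to extraction; as a sketch, the test set's problems are commonly written as follows, with $Q \succeq 0$.)

$$
\begin{array}{ll}
\underset{x}{\mbox{minimize}} & \frac{1}{2} x^\top Q x + c^\top x \\
\mbox{subject to} & c_l \le A x \le c_u, \\
& l \le x \le u.
\end{array}
$$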
Thank you! I'll check it out in a separate PR. I have another question: what is the best way to handle one bound and not the other, e.g. when only one of `lb` and `ub` is provided?
For now I went for:

```python
cone["bl"] = lb if lb is not None else np.full((n,), -np.inf)
cone["bu"] = ub if ub is not None else np.full((n,), +np.inf)
```

If that's proper we can merge https://github.com/stephane-caron/qpsolvers/pull/67/files to push the box cone API to qpsolvers.
Yes you can do that, or, probably easier, just use the standard linear cone when only one side of the box is specified.
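A sketch of that dispatch (a hypothetical helper, not from the PR): fall back to positive-cone rows when only one side is given.

```python
import scipy.sparse as sparse

def one_sided_bound_rows(lb, ub, n):
    """Extra (G, h) rows for one-sided bounds; None means use the box cone."""
    if lb is not None and ub is not None:
        return None  # two-sided: the box cone handles lb <= x <= ub directly
    if ub is not None:
        return sparse.eye(n, format="csc"), ub    # x <= ub
    if lb is not None:
        return -sparse.eye(n, format="csc"), -lb  # x >= lb, i.e. -x <= -lb
    return None  # unconstrained
```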
OK, thank you for your feedback in this issue 😃 The box cone API is now rolled out in qpsolvers v2.1.0.
@bodono FYI I've started working on it in a QP solvers benchmark, similar to the ones from OSQP and ProxQP: https://github.com/stephane-caron/qpsolvers_benchmark/blob/main/results/maros_meszaros.md. SCS 3.2.0 is part of the tested solvers, and if you'd like you can check the preliminary results in that report (or run the benchmark on your machine). The methodology is also an open work in progress, e.g. what's a good metric to compare overall solver primal accuracies? Looking for wisdom 😃
How do I run the benchmark? I don't see the script mentioned in the README.
There are a lot of dependencies just to run SCS on this. I have already run into needing to install `highspy`.
Sorry, the README was outdated (the code is under dev, things still change quite fast). You can do:

```console
python benchmark.py run maros_meszaros --solver scs --settings default
```
No, we don't. I'll make them optional. I don't see how you ran into a highspy import with the current version, though.
From my run yesterday:
Thank you 😃 It's fixed now: qpsolvers/qpbenchmark#4
I only have SCS and OSQP installed on this machine, so I only ran those two. Even still, I'm seeing some strange stuff about tolerances. It looks like OSQP runs at its default settings while the tolerances requested from SCS are much tighter.
Obviously this is going to make SCS take longer and therefore look worse. You have to be very careful about this type of thing when comparing solvers. It's easy to make one solver look better than another by accidentally asking one to solve to 1000x the accuracy of the other. The other thing to know is that OSQP does not constrain the duality gap, which can make it artificially faster, but in reality it's returning false solutions. I discuss this in a paper.

Another point is how to restore from the box cone formulation to the original problem when running SCS. This is a minor point, but I wonder if you are dividing by the box-cone scale variable `t` when recovering the solution.
This is a big issue, I'll address it with high priority, first in qpsolvers/qpsolvers#102, so that all solvers use their default tolerance settings (left to solver developers). (The override of SCS's default tolerances comes from qpsolvers/qpsolvers#50 (comment); I'll find some other way to address that issue while keeping SCS's defaults.) The follow-up question for the benchmark is that some solvers could then appear faster simply by having laxer default tolerances. That is why benchmark results also include a shifted geometric mean of the primal residual at the returned solution, so that such solvers would show up as both faster and less precise.
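For reference, the shifted geometric mean mentioned here is usually defined (as in Mittelmann's solver benchmarks) over values $x_1, \ldots, x_n$ with a shift $k > 0$ that guards against near-zero values:

$$
\mathrm{sgm}_k(x_1, \ldots, x_n) = \left( \prod_{i=1}^{n} (x_i + k) \right)^{1/n} - k.
$$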
Interesting! Let's take a look at it → qpsolvers/qpsolvers#103
Oh, I see, I thought this project set the defaults. You're saying the defaults are coming from the qpsolvers package, which are tuned in some sense for some other outcome. I guess it's up to you whether the point of this project is to make a fair comparison of these solvers in a head-to-head with equalised settings (which is what I was assuming), or if it's more about tuning the qpsolvers package and the results are more indicative of their performance within that package. I'm not sure what's going on with that primal residual because some of them seem enormous.
The goal of the benchmark is to compare solvers through at least two lenses:

- their performance when run with default settings, and
- their performance when asked for high accuracy (the high_accuracy settings).
It is totally open to other variants, e.g. having non-zero relative tolerances.
Not really, qpsolvers should just be a gateway. It has deviated from that ideal, but it will come back on track.
This could be due to how qpsolvers handles
Confirmed. The problems that had cost/primal errors through the roof were:

I checked them manually and they were all

Now the benchmark is updated to consider

(I'll re-run the full Maros-Meszaros as well.)
That makes sense. Are these still with 1e-7 accuracy? That's probably a bit out of the range where SCS is best. By default SCS uses 1e-4 for both `eps_abs` and `eps_rel`.
SCS is now at its 1e-4 defaults for both `eps_abs` and `eps_rel`.
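For the record, extra keyword arguments to qpsolvers' `solve_qp` are forwarded to the backend solver, so making the SCS tolerances explicit per call looks like this (toy problem data, just for illustration):

```python
import numpy as np
from qpsolvers import solve_qp

# Toy strictly convex QP: minimize 1/2 x^T P x + q^T x subject to G x <= h.
P = np.eye(2)
q = np.array([1.0, -1.0])
G = -np.eye(2)  # encodes x >= 1 elementwise
h = -np.ones(2)

# Keyword arguments after the solver name go straight to SCS, so its
# 1e-4 defaults are stated explicitly here rather than overridden.
x = solve_qp(P, q, G, h, solver="scs", eps_abs=1e-4, eps_rel=1e-4)
print(x)
```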
It is now back on track, i.e. all defaults are left to the solvers. I re-ran the Maros-Meszaros test set yesterday; here is the resulting report. The success rate for SCS 3.2.2 is around 33% with its default settings (and there is indeed a drop to 21% with the high_accuracy settings).
Looking at the instances you posted, I see for instance the first entry is

but in the results spreadsheet I have:

And I notice that the true objective value is listed as (FYI: in my CSV the
Oh my, sorry for this 🙈 It is indeed the negation of the second clause, i.e. the table lists the percentage of instances where (1) the solver said it found the solution and (2) the solution passed tolerance checks. → Updated report
So, for instance, CVXOPT is always right when it says it finds a solution with its default settings. But it only finds solutions 16% of the time.
I'm totally in favor of implementing option 2 down the line. The reason why it won't be implemented in this first release of the benchmark stems from a quick estimate of (1) the workload to wire dual multipliers for all backend solvers 😅 and (2) the time I can spend on this in the upcoming weeks.
Agreed. Those are just initial loose values to check that everything works. I will identify tighter values for the cost and primal tolerances before the initial release of the benchmark. → https://github.com/stephane-caron/qpsolvers_benchmark/issues/15
Yes 👍 For the Maros-Meszaros test set all true objectives are

This isn't entirely satisfactory though, because on some problems in different test sets

I've added this point to the design discussion here: qpsolvers/qpbenchmark#18.
Thanks a lot @bodono for your feedback 😃 I will check your results and the performance on DUALC1 in qpsolvers/qpbenchmark#19.
This is not an issue, just my feedback while trying out the box cone API in quadratic programming.
Context: suggestion from @bodono at #36 (comment)
Following this comment, I end up with this code:
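(The original snippet was lost when this page was archived; below is a sketch of this kind of setup with illustrative problem data, assuming the scs Python API. The box-cone slack is [t; s], and the first row of A pins the scale t to 1.)

```python
import numpy as np
import scipy.sparse as sparse
import scs

n = 3                              # illustrative problem size
P = sparse.eye(n, format="csc")    # quadratic cost term
c = np.ones(n)                     # linear cost term
lb, ub = -np.ones(n), np.ones(n)   # box bounds on x

# Conic form: A x + s = b with s = [t; s_x] in the box cone.
# Row 0 (all zeros, b = 1) forces t = 1; rows 1..n give s_x = x,
# so cone membership t*lb <= s_x <= t*ub reduces to lb <= x <= ub.
A = sparse.vstack([sparse.csc_matrix((1, n)), -sparse.eye(n)], format="csc")
b = np.hstack([1.0, np.zeros(n)])
cone = {"bsize": n + 1, "bl": lb, "bu": ub}

data = {"P": P, "A": A, "b": b, "c": c}
solution = scs.solve(data, cone, verbose=False)
print(solution["x"])  # satisfies lb <= x <= ub
```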
Empirically it yields the correct results.
Curiosity about the API
The API surprised me in two ways while implementing this:
- Why do "bl" and "bu" go to `cone`? At first, I wrongly assumed they would go to `data`.
- Why do we need to set `cone["bsize"]`? This is redundant since SCS can figure it out from "bl" and "bu" when they are present.

Suggestions for the doc
This was useful to me but does not appear in the Cones page of the documentation. It should probably be mentioned there for future travelers.
Now, about `cone["bsize"]`: what I understand from the doc is that `bsize` is the length of `[t; s]`, but I wasn't sure at first. For zero ambiguity, the doc could e.g. mention that `bsize` is k + 1, referring to the formula to the left.
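For context, the box cone being discussed is (restating the SCS documentation from memory; check the docs for the authoritative version):

$$
\mathcal{K}_{\mathrm{box}} = \left\{ (t, s) \in \mathbf{R}_+ \times \mathbf{R}^k : t\, l \le s \le t\, u \right\},
$$

so the slack [t; s] has length k + 1, which is exactly what `bsize` counts.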