I just developed a new solver and want to claim that it's the best. To make my claim, I'd pick a problem class from a benchmark library (e.g., CSPLib), say, the Car Sequencing problem. I'd then evaluate my shiny new solver, along with the dozens of other solvers from the literature, on the instances from the library. My solver shows very good performance on those instances, so I just announce myself as the winner, yay! ;)
But here's the secret: I've been using those same instances to test and evaluate my solver throughout the development process. So my solver is very likely biased towards those particular instances, and its good performance may not generalise to other problem classes, or even to other instances of the same problem class. My claim is probably a lie!
In general, access to a wide range of instances with diverse properties is always needed. Such instances are useful not only for benchmarking solvers and learning about their complementary strengths and weaknesses, but also for supporting the algorithm development process itself. For example, we may want a set of small, easy instances to quickly test newly developed features of our solver, and then a large set of challenging instances to identify potential weaknesses and improve the solver further.
Recently, in a series of works with colleagues at the CP group in St Andrews, we developed an automated instance generation tool called AutoIG. The tool combines constraint modelling, constraint solving, and automated algorithm configuration techniques.
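To give a rough feel for how instance generation and algorithm configuration can interact, here is a toy sketch in Python: randomly sample instance parameters and keep the instances on which two solvers' performance differs most. The solver functions, parameter names, and ranges below are invented stand-ins for illustration only; this is not AutoIG's actual interface or algorithm (AutoIG uses a proper automated algorithm configurator rather than random search).

```python
import random

# Hypothetical stand-ins for real solvers: each returns a simulated
# runtime for an instance described by its parameters. In practice,
# these would run actual constraint solvers on generated instances.
def solver_a(n_cars, n_options):
    return 0.01 * n_cars + 0.5 * n_options

def solver_b(n_cars, n_options):
    return 0.2 * n_cars + 0.05 * n_options

def generate_discriminating_instances(n_samples=200, top_k=10, seed=0):
    """Randomly sample instance parameters and keep the top_k instances
    with the largest performance gap between the two solvers (a crude
    stand-in for the search an automated configurator would perform)."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n_samples):
        params = (rng.randint(10, 100), rng.randint(1, 20))
        gap = abs(solver_a(*params) - solver_b(*params))
        scored.append((gap, params))
    scored.sort(reverse=True)  # most discriminating instances first
    return [params for _, params in scored[:top_k]]

print(generate_discriminating_instances())
```

The same skeleton can be repointed at other objectives, e.g. keeping instances that are hard for every solver, or easy for all of them, depending on what the instance set is for.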
If you're interested in learning more about the tool we developed, here are some resources:
There are a couple of limitations in the current implementation. Our plan is to rewrite the tool to make it more modular and flexible, and to improve the communication between the various pieces of software integrated within it. The main programming language is Python.
Rewriting the tool is an important step towards expanding AutoIG in several directions. There will be various research questions to explore during the expansion, and we hope you can join us in both phases!
If you have questions or want to know more details, feel free to contact me at [email protected]