
Automated Instance Generation #19

Open · ndangtt opened this issue Sep 18, 2024 · 0 comments

ndangtt commented Sep 18, 2024

I just developed a new solver and want to claim that my solver is the best. To make my claim, I'd go and pick a problem class from a benchmark library (e.g., CSPLib), say, the Car Sequencing problem. I'd then evaluate my shiny new solver and the (dozens of) other solvers that exist in the literature on the instances from the library. I see that my solver shows very good performance on those instances, so I just announce myself as the winner, yay! ;)

But the secret is: I've been using the same instances to test and evaluate my solver during the development process. So it's very likely that my solver is biased towards those particular instances, and its good performance doesn't generalise to other problem classes, or even to other instances of the same problem class. My claim is probably a lie!

In general, having access to a wide range of instances with diverse properties is always valuable. Such instances are useful not only for benchmarking solvers and learning about their complementary strengths and weaknesses, but also for supporting the algorithm development process. For example, sometimes we want a bunch of small and easy instances to quickly test newly developed functionality in our solver, and then a large set of challenging instances to identify potential weaknesses and improve it further.

Recently, in a series of works with colleagues at the CP group in St Andrews, we developed an automated instance generation tool called AutoIG. The tool combines constraint modelling, constraint solving, and automated algorithm configuration techniques.
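To make that combination a bit more concrete, here is a minimal, self-contained Python sketch of the underlying loop: a parameterised generator defines a space of instances, a solver run measures difficulty, and a search over the generator's parameters steers towards instances of a target difficulty. All names below are hypothetical stand-ins, and the random-search loop is only for brevity; AutoIG itself drives this search with an automated algorithm configurator rather than random sampling:

```python
import random

# Minimal sketch of automated instance generation: search the instance
# generator's parameter space for instances of a target difficulty.
# All functions and parameter names here are hypothetical stand-ins.

# Hypothetical parameter space of a Car Sequencing instance generator.
PARAM_SPACE = {
    "n_cars": range(10, 201),
    "n_classes": range(2, 21),
    "n_options": range(1, 6),
}

def generate_instance(params):
    """Stand-in for a parameterised instance generator."""
    return dict(params, seed=random.random())

def solve(instance, time_limit=60.0):
    """Stand-in for running a real solver; returns a simulated runtime."""
    return random.uniform(0.0, 2 * time_limit)

def score(runtime, time_limit=60.0):
    """Reward instances that are non-trivial but solvable within the limit."""
    if runtime >= time_limit:      # too hard: timed out
        return 0.0
    return runtime / time_limit    # harder (but still solved) is better

def random_search(n_iterations=100):
    """Plain random search, standing in for an algorithm configurator."""
    best, best_score = None, -1.0
    for _ in range(n_iterations):
        params = {name: random.choice(list(space))
                  for name, space in PARAM_SPACE.items()}
        runtime = solve(generate_instance(params))
        s = score(runtime)
        if s > best_score:
            best, best_score = params, s
    return best, best_score

if __name__ == "__main__":
    params, s = random_search()
    print(f"best generator parameters: {params} (score {s:.2f})")
```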

If you're interested in learning more about the tool we developed, here are some pointers:

  • a poster for a brief overview,
  • slides for more details,
  • and our latest paper for even more details.

There are a couple of limitations in the current implementation. Our plan is to rewrite the tool to make it more modular and flexible, and to improve the communication between the various pieces of software integrated within the tool. The main programming language is Python.

Rewriting the tool is an important step towards expanding AutoIG in several directions. There will be various research questions to explore during the expansion, and we hope you can join us in both phases!
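As a very rough illustration of the kind of modularity we have in mind, the rewrite could define small, swappable components along these lines. These interfaces are purely hypothetical and do not reflect the current AutoIG code:

```python
from abc import ABC, abstractmethod

# Hypothetical interfaces sketching one way the rewritten tool could
# decouple its components; not the current AutoIG API.

class InstanceGenerator(ABC):
    @abstractmethod
    def generate(self, params: dict) -> str:
        """Produce a problem instance (e.g., a parameter file) from params."""

class SolverRunner(ABC):
    @abstractmethod
    def run(self, instance: str, time_limit: float) -> float:
        """Run a solver on the instance; return the runtime in seconds."""

class Configurator(ABC):
    @abstractmethod
    def search(self, evaluate) -> dict:
        """Search the parameter space, calling evaluate(params) -> score."""
```

Swapping in a different solver or configurator would then be a matter of providing another implementation of the relevant interface.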

If you have questions or want to know more details, feel free to contact me at [email protected].
