---
title: "AI-powered heuristics for heavy industry mathematical optimization"
author: ["Mingfei Sun"]
lastmod: 2024-01-24T22:47:03+00:00
draft: false
weight: 4001
event: "Projects 2024"
authors: ["mingfeisun","manuellopezibanez","nicolasalvarez","silvinofernandez"]
summary: "This project will explore the use of a transformer-based architecture, inspired by state-of-the-art LLMs, as a scheduling method for industrial production facilities. This architecture would learn from historical data and generate new schedules in a similar way to how LLMs generate text, implicitly learning the scheduling rules from the data. This approach would be an alternative (or complement) to optimization meta-heuristics. The project will explore cutting-edge technologies from academia with a strong link to industry."
---

The state-of-the-art Transformer neural network architecture [1] has recently revolutionized the world of generative Artificial Intelligence (AI). This technology is behind the most famous and successful Large Language Models (LLMs), such as GPT-3/4 (OpenAI) [2, 3], BERT and PaLM 2 (Google) [4, 5], LLaMA (Meta) [6], and other generative models (GMs). These LLMs are the main engine of popular AI assistants such as ChatGPT (OpenAI) and Bard (Google), which have demonstrated the potential of these kinds of models. GMs have also recently been used to produce feasible solutions to combinatorial optimization problems [7].

Many optimization problems arising in industry are combinatorial in nature and involve sequencing, scheduling, cutting or packing. For example, the scheduling of production lines in heavy industry is a challenge faced every day by many companies, with a critical impact on their daily results. Due to its complexity and the available time constraints, the use of classical exact techniques such as integer linear programming is infeasible. Nowadays this scheduling is often solved by means of hand-crafted and carefully tuned heuristics and meta-heuristics [8], including local search and constructive heuristic methods such as GRASP. Although these optimization techniques have been used for many years, they still suffer from issues such as the difficulty of properly tuning their parameters [9], the need to develop a tailored algorithm for each problem or project [10], a very limited number of solution evaluations due to time requirements [11], and the difficulty of adding learning to them [12]. It is often the case that a different instance of the same problem must be solved every week, day or hour, so there is a large corpus of data available from previous instances and already evaluated solutions.
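
To illustrate the kind of constructive heuristic mentioned above: GRASP builds a schedule step by step, at each step choosing randomly among the best-scoring candidates and restarting several times. The following is a minimal sketch for a toy single-machine problem (minimising total weighted completion time); the jobs, weights and `alpha` value are invented for illustration and are not from the project itself:

```python
import random

def grasp_construct(jobs, alpha=0.3, rng=None):
    """Build one schedule by greedy randomised construction.

    jobs:  dict job_name -> (processing_time, weight)
    alpha: 0.0 = pure greedy, 1.0 = pure random (width of the RCL)
    """
    rng = rng or random.Random()
    remaining = dict(jobs)
    schedule, t, cost = [], 0, 0
    while remaining:
        # Greedy score: WSPT ratio p/w (lower is better on a single machine).
        scores = {j: p / w for j, (p, w) in remaining.items()}
        best, worst = min(scores.values()), max(scores.values())
        threshold = best + alpha * (worst - best)
        # Restricted candidate list: all jobs close enough to the greedy choice.
        rcl = [j for j, s in scores.items() if s <= threshold]
        pick = rng.choice(rcl)
        p, w = remaining.pop(pick)
        t += p                # machine time after finishing this job
        cost += t * w         # accumulate weighted completion time
        schedule.append(pick)
    return schedule, cost

# Multi-start: construct several schedules with different seeds, keep the best.
jobs = {"A": (3, 2), "B": (1, 5), "C": (4, 1)}
best = min((grasp_construct(jobs, rng=random.Random(seed)) for seed in range(20)),
           key=lambda sc: sc[1])
print(best)
```

The `alpha` parameter controls the greedy/random trade-off; a full GRASP would follow each construction with a local-search phase, omitted here for brevity.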

This project proposes to explore the possibility of using a transformer-based architecture, inspired by state-of-the-art LLMs, as a scheduling method for production facilities. This architecture would learn from historical data (e.g., past schedules) and would generate new schedules in a similar way to how LLMs currently generate text, implicitly learning the scheduling rules from the data. Candidates will explore cutting-edge technologies from academia with a strong link to industry, facing a real-world problem and applying those technologies to solve it. The project aims to adapt such LLM/GM methods to the task of generating solutions to industrial combinatorial optimization problems. The main research questions it aims to answer are:

* How to adapt these LLM/GM methods to generate feasible, high-quality solutions for complex combinatorial optimization problems arising in industry?

* How to efficiently train such a system, reusing as much available data as possible and minimising the amount of new data required?

* How to efficiently update the system in an online manner with the data generated by solving each new instance?

* Can such an AI system outperform the state-of-the-art heuristics used in industry on its own, or does it need to be complemented with other optimization methods?
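
To make the proposed idea concrete: a schedule can be treated as a sequence of job tokens, a sequence model fitted on historical schedules, and new schedules decoded autoregressively under a feasibility mask (each job scheduled exactly once). The sketch below uses a simple bigram count model as a stand-in for the transformer decoder the project actually proposes; the job names and histories are invented:

```python
from collections import Counter, defaultdict

START, END = "<s>", "</s>"

def fit_bigrams(histories):
    """Count job-to-job transitions in historical schedules."""
    counts = defaultdict(Counter)
    for seq in histories:
        for prev, nxt in zip([START] + seq, seq + [END]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, jobs):
    """Greedy autoregressive decoding with a feasibility mask."""
    schedule, prev, pending = [], START, set(jobs)
    while pending:
        # Score only the jobs not yet scheduled; pick the most likely next one.
        candidates = {j: counts[prev][j] for j in pending}
        nxt = max(candidates, key=candidates.get)
        schedule.append(nxt)
        pending.discard(nxt)
        prev = nxt
    return schedule

# Invented historical schedules for a fictional galvanizing line.
histories = [["cast", "roll", "galvanize"],
             ["cast", "roll", "anneal"],
             ["roll", "galvanize", "anneal"]]
model = fit_bigrams(histories)
print(generate(model, ["anneal", "cast", "galvanize", "roll"]))
```

A transformer would replace the bigram counts with learned contextual probabilities over the whole partial schedule, but the decoding loop and feasibility masking would have the same shape.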

***References***

[1] Vaswani, A. et al. (2017). Attention is all you need. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

[2] Brown, T. B. et al. (2020). Language Models are Few-Shot Learners. GPT-3 Technical Report, OpenAI.

[3] OpenAI (2023). GPT-4 Technical Report.

[4] Devlin, J. et al. (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Google AI Language.

[5] Google AI (2023). PaLM 2 Technical Report. https://doi.org/10.48550/arXiv.2305.10403

[6] Meta AI (2023). LLaMA: Open and Efficient Foundation Language Models. https://doi.org/10.48550/arXiv.2302.13971

[7] Hao Zeng, Jiaqi Wang, Avirup Das, Junying He, Kunpeng Han, Haoyuan Hu, and Mingfei Sun. Effective Generation of Feasible Solutions for Integer Programming via Guided Diffusion. https://openreview.net/pdf?id=joMMM9eadc

[8] Fernandez, S., Alvarez, S., Díaz, D., Iglesias, M., Ena, B. (2014). Scheduling a Galvanizing Line by Ant Colony Optimization. In: Dorigo, M., et al. Swarm Intelligence. ANTS 2014. Lecture Notes in Computer Science, vol 8667. Springer, Cham. https://doi.org/10.1007/978-3-319-09952-1_13

[9] Mudita Sharma, Alexandros Komninos, Manuel López-Ibáñez, and Dimitar Kazakov. Deep Reinforcement Learning-Based Parameter Control in Differential Evolution. In M. López-Ibáñez and A. Auger, editors, Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2019. ACM Press, New York, NY, 2019. http://doi.org/10.1145/3321707.3321813

[10] Franco Mascia, Manuel López-Ibáñez, Jérémie Dubois-Lacoste, and Thomas Stützle. Grammar-based generation of stochastic local search heuristics through automatic algorithm configuration tools. Computers & Operations Research, 51:190–199, 2014. http://doi.org/10.1016/j.cor.2014.05.020

[11] Silvino Fernández, Pablo Valledor, Diego Diaz, Eneko Malatsetxebarria, and Miguel Iglesias. 2016. Criticality of Response Time in the usage of Metaheuristics in Industry. In Proceedings of the 2016 on Genetic and Evolutionary Computation Conference Companion (GECCO '16 Companion). Association for Computing Machinery, New York, NY, USA, 937–940. https://doi.org/10.1145/2908961.2931649

[12] Laurens Bliek, Paulo da Costa, Reza Refaei Afshar, Robbert Reijnen, Yingqian Zhang, Tom Catshoek, Daniël Vos, Sicco Verwer, Fynn Schmitt-Ulms, André Hottung, Tapan Shah, Meinolf Sellmann, Kevin Tierney, Carl Perreault-Lafleur, Caroline Leboeuf, Federico Bobbio, Justine Pepin, Warley Almeida Silva, Ricardo Gama, Hugo L. Fernandes, Martin Zaefferer, Manuel López-Ibáñez, and Ekhine Irurozki. The First AI4TSP Competition: Learning to Solve Stochastic Routing Problems. Artificial Intelligence, 319:103918, 2023. http://doi.org/10.1016/j.artint.2023.103918