Operations Research (OR) is vital for decision-making in many industries. While recent OR methods have seen significant improvements in automation and efficiency through integrating Large Language Models (LLMs), they still struggle to produce meaningful explanations. This lack of clarity raises concerns about transparency and trustworthiness in OR applications. To address these challenges, we propose a comprehensive framework, Explainable Operations Research (EOR), emphasizing actionable and understandable explanations accompanying optimization. The core of EOR is the concept of *Decision Information*, which emerges from what-if analysis and focuses on evaluating the impact of changes to complex constraints (or parameters) on decision-making. Specifically, we utilize bipartite graphs to quantify the changes in the OR model and adopt LLMs to improve the explanation capabilities. Additionally, we introduce the first industrial benchmark to rigorously evaluate the effectiveness of explanations and analyses in OR, establishing a new standard for transparency and clarity in the field.
- Install and set up the required Python packages.
- Install and set up the Gurobi Optimizer from Gurobi's official website.
- Set your own `api_key` in the `OAI_CONFIG_LIST` file.
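If this repository follows the common AutoGen convention, `OAI_CONFIG_LIST` is a JSON list of model configurations. A minimal sketch (the model name and key below are placeholders; the exact fields depend on your setup):

```json
[
    {
        "model": "gpt-4",
        "api_key": "sk-your-key-here"
    }
]
```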
The examples for the problems in the 1-shot setting are in the `examples` folder.
- Option 1: To run the 0/1-shot setting one problem at a time, run:

  ```shell
  python run-0shot.py
  # or
  python run-1shot.py
  ```
- Option 2: To run the 0/1-shot setting for all problems, run:

  ```shell
  python run_all_problems-0shot.py
  # or
  python run_all_problems-1shot.py
  ```
We provide the 30 problems, with 10 queries per problem, in the `benchmark` folder, and the ground-truth labels in the `True-labels` folder.
We provide the baseline code (OptiGuide and Standard) in the `baselines` folder.
Please cite our paper if you use this code in your work:
```bibtex
@article{zhang2025,
  title={Decision Information Meets Large Language Models: The Future of Explainable Operations Research},
  author={Zhang, Yansen and Kang, Qingcan and Yu, Wing Yin and Gong, Hailei and Fu, Xiaojin and Han, Xiongwei and Zhong, Tao and Ma, Chen},
  journal={ICLR},
  year={2025}
}
```
Thanks to the following repositories, which helped in developing this one: