An extensive Reinforcement Learning (RL) for Combinatorial Optimization (CO) benchmark. Our goal is to provide a unified framework for RL-based CO algorithms, and to facilitate reproducible research in this field, decoupling the science from the engineering.
RL4CO is built upon TorchRL, TensorDict, PyTorch Lightning, and Hydra.
We offer flexible and efficient implementations of numerous policies and RL algorithms, such as the Attention Model (AM) and POMO.
We also provide several utilities and a modular design: reusable components such as environment embeddings can easily be swapped out to solve new problems.
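As an illustration of this modularity, here is a minimal sketch of a custom environment embedding. The `init_embedding` keyword and the `"locs"` tensor key are assumptions made for illustration and may differ from the actual API; treat this as a sketch rather than a definitive recipe.

```python
import torch.nn as nn

class MyInitEmbedding(nn.Module):
    """Hypothetical embedding that projects raw 2D node coordinates into the hidden space."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.project = nn.Linear(2, embed_dim)  # (x, y) coordinates -> embed_dim

    def forward(self, td):
        # td["locs"]: [batch, num_loc, 2] node locations provided by the environment (assumed key)
        return self.project(td["locs"])

# Hypothetical usage: swap the default embedding when constructing a policy
# policy = AttentionModelPolicy(env_name="tsp", init_embedding=MyInitEmbedding())
```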
RL4CO is now available for installation on `pip`!

```bash
pip install rl4co
```
To get started, we recommend checking out our quickstart notebook or the minimalistic example below.
This command installs the bleeding-edge `main` version, useful for staying up to date with the latest developments, for instance if a bug has been fixed since the last official release but a new release hasn't been rolled out yet:

```bash
pip install -U git+https://github.com/ai4co/rl4co.git
```
If you want to develop RL4CO, we recommend installing it locally with `pip` in editable mode:

```bash
git clone https://github.com/ai4co/rl4co && cd rl4co
pip install -e .
```

We recommend using a virtual environment such as `conda` to install `rl4co` locally.
Train a model with the default configuration (AM on the TSP environment):

```bash
python run.py
```
Tip: You may check out this notebook to get started with Hydra!
You can override experiment settings from the command line, for example the environment, its size, and the learning rate:

```bash
python run.py experiment=routing/am env=tsp env.num_loc=50 model.optimizer_kwargs.lr=2e-4
```
To disable logging and remove the learning rate monitor callback:

```bash
python run.py experiment=routing/am logger=none '~callbacks.learning_rate_monitor'
```
To run a sweep over several learning rates with Hydra's multirun mode (`-m`):

```bash
python run.py -m experiment=routing/am model.optimizer.lr=1e-3,1e-4,1e-5
```
Here is a minimalistic example training an Attention Model policy with POMO on TSP in less than 30 lines of code:
```python
from rl4co.envs.routing import TSPEnv, TSPGenerator
from rl4co.models import AttentionModelPolicy, POMO
from rl4co.utils import RL4COTrainer

# Instantiate generator and environment
generator = TSPGenerator(num_loc=50, loc_distribution="uniform")
env = TSPEnv(generator)

# Create policy and RL model
policy = AttentionModelPolicy(env_name=env.name, num_encoder_layers=6)
model = POMO(env, policy, batch_size=64, optimizer_kwargs={"lr": 1e-4})

# Instantiate Trainer and fit
trainer = RL4COTrainer(max_epochs=10, accelerator="gpu", precision="16-mixed")
trainer.fit(model)
```
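After training, the policy can be rolled out on newly generated instances. The following is a rough sketch of greedy evaluation; the keyword arguments to the policy call (`phase`, `decode_type`) are assumptions based on typical RL4CO quickstart usage rather than guarantees:

```python
# Sketch: greedily decode the trained policy on a fresh batch of TSP instances.
# The policy call signature below is assumed, not confirmed by this README.
device = "cuda"
td_test = env.reset(batch_size=[64]).to(device)
out = model.policy.to(device)(td_test, env, phase="test", decode_type="greedy")
print(out["reward"].mean())  # mean reward (negative tour length for TSP)
```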
Other examples can be found in the documentation!
Run tests with `pytest` from the root directory:

```bash
pytest tests
```
Installing `PyG` via `conda` seems to update Torch itself. We have found that this update introduces some bugs with `torchrl`. At this moment, we recommend installing `PyG` with `pip`:

```bash
pip install torch_geometric
```
Have a suggestion or request, or found a bug? Feel free to open an issue or submit a pull request. If you would like to contribute, please check out our contribution guidelines here. We welcome and look forward to all contributions to RL4CO!
We are also on Slack if you have any questions or would like to discuss RL4CO with us. We are open to collaborations and would love to hear from you 🚀
If you find RL4CO valuable for your research or applied projects:

```bibtex
@article{berto2024rl4co,
    title={{RL4CO: an Extensive Reinforcement Learning for Combinatorial Optimization Benchmark}},
    author={Federico Berto and Chuanbo Hua and Junyoung Park and Laurin Luttmann and Yining Ma and Fanchen Bu and Jiarui Wang and Haoran Ye and Minsu Kim and Sanghyeok Choi and Nayeli Gast Zepeda and Andr\'e Hottung and Jianan Zhou and Jieyi Bi and Yu Hu and Fei Liu and Hyeonah Kim and Jiwoo Son and Haeyeon Kim and Davide Angioni and Wouter Kool and Zhiguang Cao and Jie Zhang and Kijung Shin and Cathy Wu and Sungsoo Ahn and Guojie Song and Changhyun Kwon and Lin Xie and Jinkyoo Park},
    year={2024},
    journal={arXiv preprint arXiv:2306.17100},
    note={\url{https://github.com/ai4co/rl4co}}
}
```
Note that a previous version of RL4CO has been accepted as an oral presentation at the NeurIPS 2023 GLFrontiers Workshop. Since then, the library has greatly evolved and improved!
We invite you to join our AI4CO community, an open research group in Artificial Intelligence (AI) for Combinatorial Optimization (CO)!