Code for running GLOP #222
Comments
Hi @ujjwaldasari10, we have not merged the code for GLOP yet since it has not been refactored so far.
@fedebotu, @cbhua I am adding a couple more doubts here. 1) Is it possible to save a trained EAS model and apply it to a new test instance other than the one it was saved on? 2) In the original DeepACO paper the models are run for a number of 'evaluations' rather than epochs. Can you please let me know the equivalent implementation in RL4CO, since it is not obvious to me from going through the code base? Thanks
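On question 1), RL4CO models are built on PyTorch Lightning, so weights can generally be saved and reloaded with standard Lightning checkpointing; whether the parameters adapted by EAS on one instance set remain useful on unseen instances is a separate question that the sketch below does not settle. A minimal sketch, shown with `AttentionModel` for concreteness (the environment size and class choices are illustrative assumptions, not confirmed in this thread):

```python
# A minimal checkpointing sketch using standard PyTorch Lightning calls;
# the same mechanism should apply to an EAS run if its adapted parameters
# are part of the saved module state (assumption).
from rl4co.envs import TSPEnv
from rl4co.models import AttentionModel
from rl4co.utils import RL4COTrainer

env = TSPEnv(generator_params=dict(num_loc=50))   # illustrative 50-node TSP
model = AttentionModel(env)

trainer = RL4COTrainer(max_epochs=3)
trainer.fit(model)
trainer.save_checkpoint("model.ckpt")             # persist trained weights

# Later: reload and evaluate on a different set of test instances
model = AttentionModel.load_from_checkpoint("model.ckpt")
```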
Hi @DasariShreeUjjwal 👋🏻, thank you for your interest in our work!
The "# of evaluations" in the DeepACO paper refers to the number of evaluating the performance (route length) of solutions during an ACO run on a single problem instance. However, the "epochs" in rl4co is the number of total iterations for the whole training loop, which is more similar to the Total training instances in Table 7. This can be expressed as For the former concept, we have from rl4co.models import DeepACO
model = DeepACO(
...,
policy_kwargs = dict(n_ants=20, n_iterations=10)
) |
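For context, with those settings each ACO run would perform roughly n_ants × n_iterations = 200 solution evaluations per instance (local search, if enabled, adds more). Below is a minimal end-to-end training sketch; the `TSPEnv`, `RL4COTrainer`, and `generator_params` usage are assumptions based on the standard rl4co interfaces rather than something confirmed in this thread:

```python
# A minimal training sketch, assuming the standard rl4co interfaces
# (TSPEnv, RL4COTrainer); exact keyword names may differ across versions.
from rl4co.envs import TSPEnv
from rl4co.models import DeepACO
from rl4co.utils import RL4COTrainer

env = TSPEnv(generator_params=dict(num_loc=50))      # 50-node TSP (illustrative choice)
model = DeepACO(
    env,
    policy_kwargs=dict(n_ants=20, n_iterations=10),  # ~20 * 10 = 200 evaluations per instance
)

trainer = RL4COTrainer(max_epochs=10, devices=1)
trainer.fit(model)
```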
BTW, I'm currently working on reimplementing GLOP in RL4CO, and it's almost complete. You might check the codebase here: https://github.com/Furffico/rl4co/tree/dev-glop/rl4co/models/zoo/glop
Can you please give example code for training and inference of GLOP for, let's say, TSP of size 10k?
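Since the GLOP port was still on a dev branch at this point, any example here is speculative. A minimal sketch, assuming a `GLOP` model class that follows the same pattern as the other rl4co zoo models (the class name, constructor, policy call signature, and instance sizes below are all assumptions, not the confirmed API):

```python
# Speculative sketch: assumes a GLOP class mirroring other rl4co zoo models.
# The actual API on the dev branch may differ.
from rl4co.envs import TSPEnv
from rl4co.models import GLOP          # assumption: exported like other zoo models
from rl4co.utils import RL4COTrainer

# Train on smaller instances; GLOP-style methods are typically trained on
# sub-problems and then applied to much larger instances at inference time.
env = TSPEnv(generator_params=dict(num_loc=100))
model = GLOP(env)                      # assumption: default constructor signature
trainer = RL4COTrainer(max_epochs=10)
trainer.fit(model)

# Inference on a large (e.g. 10k-node) instance
big_env = TSPEnv(generator_params=dict(num_loc=10_000))
td = big_env.reset(batch_size=[1])
out = model.policy(td, big_env, phase="test", decode_type="greedy")
print(out["reward"])                   # negative tour length (rl4co convention)
```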