Hello, I am an RL researcher, and my team and I have recently implemented HIRO (Data-Efficient Hierarchical Reinforcement Learning with Off-Policy Correction) with PFRL. I'm wondering whether a PR for an HRL algorithm (which required some large changes) would be welcome in this repository.
Thanks!
Hi, the developer team thinks it is possible to merge such a new algorithm PR, and we would really appreciate such a contribution! To gauge beforehand how easily a specific PR could be merged, could you let us know what your PR would look like, especially regarding the following aspects?
- what kind of changes the PR would make, e.g.
  - how "large" is it?
  - could it affect other algorithms?
  - could it break backward compatibility of some API?
- how the implementation has been verified, e.g.
  - is there any significant performance gap between the official HIRO implementation and yours?