
Suggestion: Request for RLlib Support in ManiSkill #727

Open
fengjungui opened this issue Nov 30, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

@fengjungui


Dear ManiSkill Team,

Hello!

First, I would like to express my gratitude for the incredible work you have done in advancing robotic reinforcement learning and skill libraries. ManiSkill provides powerful tools that greatly contribute to the development of robotics. I would like to suggest that future versions of ManiSkill include support for the RLlib library, which would further enhance its flexibility and applicability in reinforcement learning tasks.

Background:

Many reinforcement learning researchers and practitioners currently rely on RLlib (https://github.com/ray-project/ray/tree/master/rllib) as a standard library for implementing RL algorithms. Built on the Ray distributed framework, RLlib offers excellent scalability and flexibility, especially for large-scale training and multi-task parallelism, and has already demonstrated strong performance in areas such as multi-agent learning and distributed training.

Reasons for Request:

Compatibility and Extensibility: RLlib is already widely used and compatible with many reinforcement learning frameworks. Integrating RLlib support into ManiSkill would let users seamlessly leverage RLlib's wide range of RL algorithms and tools. This is particularly beneficial for multi-task training and fine-tuning, significantly improving efficiency.

Simplified User Workflow: Many developers are already familiar with RLlib's API and functionality. With RLlib support, users could continue using their existing RLlib code in ManiSkill environments, reducing the learning curve and accelerating development.

Improved Research Efficiency: RLlib provides a variety of advanced algorithms, such as PPO, DQN, A3C, and other policy-gradient and value-based methods. Native support for these algorithms would greatly enhance performance in robotic skill training, especially for tasks involving large-scale training and high-dimensional action spaces.

Multi-platform Support: RLlib's distributed capabilities and efficient management of hardware resources (CPU, GPU, etc.) would complement ManiSkill's platform, helping to better utilize hardware and speed up training.

@StoneT2000
Member

Happy to support RLlib, although I may not have time myself to look deeply into this for now. Happy to accept a pull request or provide suggestions on how to do it, though.

@StoneT2000 StoneT2000 added the enhancement New feature or request label Dec 16, 2024