Using MJX to accelerate humanoid standing-balance planning with GPU #2285
-
Intro
Dear MuJoCo enthusiasts, I'm an MSc student in Computer Science at Delft University of Technology. Although I understand much of what happens inside MuJoCo, I don't have a lot of hands-on experience with it, and I have an interesting problem to solve.

My setup
I'm currently running MuJoCo 3.1.4 in C++ on Ubuntu 22.04, working on a humanoid model. The goal is to have the humanoid perform a self-balanced stand via a custom residual based on, among other things, the Center of Mass (CoM), joint velocities, and the amount of joint control input. The balanced stand currently runs entirely on the CPU, and my goal is to move the planning part of this simulation to the GPU using MJX. Besides alleviating the CPU load when running multiple planning simulations simultaneously, I'm hoping to run far more of these parallel sampling simulations on the GPU to improve the planner's balancing performance.

My question
My question boils down to this: how do I use MJX in a C++ MuJoCo environment? On the MJX documentation page I could only find installation and coding instructions for Python. How do I install and build MJX and include it in this C++ project?

Minimal model and/or code that explain my question
No response
-
As the docs explain, MJX is a reimplementation of MuJoCo's simulation pipeline in Python using JAX, so you cannot use it from C++. Your best option is probably to port your environment to Python with the official bindings.
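To give a rough idea of what such a port could look like, here is a minimal sketch (not from this thread) that loads a model with the official Python bindings, mirrors it into MJX, and evaluates many sampled control sequences in parallel on the GPU with a simple balance-style cost (CoM height, joint velocity, control effort). The model path, cost weights, CoM body index, target height, horizon, and sample count are all placeholders, not the planner described in the question.

```python
import jax
import jax.numpy as jnp
import mujoco
from mujoco import mjx

# Load the model with the official bindings and mirror it into MJX (placeholder path).
model = mujoco.MjModel.from_xml_path("humanoid.xml")
mjx_model = mjx.put_model(model)


def rollout_cost(ctrl_sequence):
    """Roll out one control sequence and return a balance-style cost:
    CoM height error + joint velocity penalty + control effort penalty."""
    data = mjx.make_data(mjx_model)

    def step(data, ctrl):
        data = data.replace(ctrl=ctrl)
        data = mjx.step(mjx_model, data)
        cost = (
            (data.subtree_com[1, 2] - 1.3) ** 2   # CoM height of the root subtree vs. a placeholder standing height
            + 1e-3 * jnp.sum(data.qvel ** 2)      # penalize joint velocities
            + 1e-4 * jnp.sum(ctrl ** 2)           # penalize control effort
        )
        return data, cost

    _, costs = jax.lax.scan(step, data, ctrl_sequence)
    return jnp.sum(costs)


# Evaluate e.g. 4096 sampled control sequences in parallel on the accelerator.
batched_cost = jax.jit(jax.vmap(rollout_cost))
key = jax.random.PRNGKey(0)
horizon, n_samples = 50, 4096
ctrl_samples = 0.1 * jax.random.normal(key, (n_samples, horizon, model.nu))
costs = batched_cost(ctrl_samples)
best_sequence = ctrl_samples[jnp.argmin(costs)]
```

The key point is that the batching comes from `jax.vmap` over whole rollouts rather than from spawning separate simulations, which is what lets the GPU run thousands of sampling rollouts at once.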