How to Install and Use Unitree RL Gym for Robotics Training
Saturday, Dec 21, 2024 | 6 minute read
Revolutionize robotics with a cutting-edge, open-source reinforcement learning platform! Perfect for experimenting with multiple advanced robots, it advances robot autonomy and intelligent technologies while enabling seamless interaction and training setups.
“In this rapidly evolving technological era, robotics has gradually permeated various industries, from manufacturing to healthcare, from autonomous driving to home assistants; there's no shortage of use cases for robots! As artificial intelligence technologies continue to advance, empowering robots with the ability to learn autonomously has become a pressing problem that needs solving.”
1. Overview of Unitree RL GYM: A Reinforcement Learning Robot Platform
Unitree RL GYM is no ordinary platform: it is an open-source environment purpose-built for reinforcement learning! The platform works with multiple Unitree robots, such as Go2, H1, H1_2, and G1, to support research on and improvement of reinforcement learning algorithms. It provides an ideal experimental environment for researchers and developers, contributing significantly to the advancement of robot autonomy and intelligent technologies. Let's explore these possibilities together across diverse experimental setups!
2. Installing and Configuring Unitree RL GYM
Before we start training, we need to prepare the Unitree RL GYM environment through a few simple steps. Don't worry, we'll guide you through the installation process step by step!
2.1 Create a Python Virtual Environment
First, use Python 3.8 (the newest version Isaac Gym Preview 4 supports) to create a new virtual environment, which keeps the project's dependencies isolated and helps you avoid conflicts. Type the following command in your terminal:
python3.8 -m venv unitree_rl_gym_env
And just like that, you have created a virtual environment named unitree_rl_gym_env! You'll perform all future installations and configuration within this environment. Next, activate it with the following command:
source unitree_rl_gym_env/bin/activate
With this small step, you'll ensure that all packages are installed within this environment!
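To double-check that the activation worked, you can ask the shell which interpreter is now in use; these are standard shell commands, and the expected output assumes the environment name used above:
which python   # should print a path inside unitree_rl_gym_env/
python --version   # should report Python 3.8.x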
2.2 Install PyTorch
Next, let's install the deep learning framework PyTorch! Enter the following command in your terminal:
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu121
This command installs a specific version of PyTorch built for CUDA 12.1! If your system does not support CUDA, simply remove the --index-url part to install the default build of PyTorch instead, which will run on the CPU on machines without a GPU.
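Before moving on, it is worth a quick sanity check that PyTorch can see your GPU; this one-liner uses only standard PyTorch APIs:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
If it prints the version followed by True, the CUDA build is working; False simply means PyTorch will run on the CPU.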
2.3 Install Isaac Gym
Next, install a crucial physics simulation tool: Isaac Gym. You can download Isaac Gym Preview 4 from NVIDIA's official website. Once downloaded, navigate to the Isaac Gym Python directory and run:
cd isaacgym/python && pip install -e .
By installing Isaac Gym as an editable package using pip install -e ., you'll have more flexibility for various development tasks!
To ensure a successful installation, try running one of the included examples:
cd examples && python 1080_balls_of_solitude.py
If the example runs smoothly, that means your Isaac Gym is up and running!
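One common stumbling block with Isaac Gym Preview 4: if the example fails with an ImportError about libpython3.8.so.1.0, the dynamic linker cannot find Python's shared library. A typical workaround is to point LD_LIBRARY_PATH at the directory that contains it; the path below is a placeholder, so adjust it to your own install:
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/your/python3.8/lib"   # adjust to your Python's lib directory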
2.4 Install PPO Implementation (rsl_rl)
Now, let's install the rsl_rl library, which provides the PPO implementation used for training! First, clone the library:
git clone https://github.com/leggedrobotics/rsl_rl
Then enter the library directory and install:
cd rsl_rl && git checkout v1.0.2 && pip install -e .
This ensures you are using the specified version (v1.0.2) of the project to avoid any compatibility issues!
2.5 Install Unitree RL GYM
Now, we can finally install Unitree RL GYM! If you don't have the source yet, clone the project first (the URL below assumes the official unitreerobotics repository on GitHub):
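git clone https://github.com/unitreerobotics/unitree_rl_gym
Then enter the project folder and run: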
pip install -e .
This step also installs Unitree RL GYM in editable mode!
2.6 Install unitree_sdk2py (optional)
If you plan to run the models on a physical robot, installing unitree_sdk2py is essential! Here's the command to clone the SDK:
git clone https://github.com/unitreerobotics/unitree_sdk2_python
Once you enter the SDK directory, you can install it with:
cd unitree_sdk2_python && pip install -e .
Now you're ready to run your trained models on physical robots!
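As a quick check that the SDK is visible to your environment, try importing it (the import name unitree_sdk2py matches the package name above):
python -c "import unitree_sdk2py; print('unitree_sdk2py imported OK')"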
After completing the above steps, your environment is all set up! It's time to have fun and train your robots!
3. Training with Unitree RL GYM
Great! With the environment ready, we can now begin training our robot using Isaac Gym. Here's how to start the training!
3.1 Training
To initiate the training, simply type the following command in the terminal:
python legged_gym/scripts/train.py --task=go2
Here, we specify the task as go2, indicating we're starting the training for that specific robot! If you need to run the simulation on the CPU, you can add the following parameters:
--sim_device=cpu --rl_device=cpu
This will cause all computations to run on the CPU, making it suitable for machines without NVIDIA graphics cards! During training, you can press the v key to stop rendering, which can enhance training efficiency, and switch it back on anytime to check progress.
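Putting those flags together, a complete CPU-only training run looks like this:
python legged_gym/scripts/train.py --task=go2 --sim_device=cpu --rl_device=cpu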
Once training is complete, the model will be saved under logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt, and you can load it whenever needed!
Command-Line Parameter Breakdown (a combined example follows the list):
- --task TASK: Name of the task, specifying which type of robot you're training (e.g., go2).
- --resume: If there is a previous training checkpoint, you can resume training with this parameter.
- --experiment_name EXPERIMENT_NAME: Name your experiment to help manage different trials.
- --num_envs NUM_ENVS: Specify the number of environments created, to assist with parallel training!
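Combining these, here is an illustrative training invocation; the experiment name and environment count are arbitrary example values, so tune num_envs to your GPU memory:
python legged_gym/scripts/train.py --task=go2 --experiment_name=go2_walk --num_envs=4096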
3.2 Play
Next, you can interact with the trained model by running this command:
python legged_gym/scripts/play.py --task=go2
By default, it will load the last model from the most recent run of your experiment! To load other runs or models, just set the load_run and checkpoint parameters accordingly. For example:
python legged_gym/scripts/play.py --task=go2 --load_run=<run_name> --checkpoint=<checkpoint_name>
Make sure to replace <run_name> and <checkpoint_name> with your actual run and checkpoint names!
3.3 Demonstration of Play
With these straightforward commands, you can now directly interact with the trained policy, observe the robot's performance in various environments, and enjoy the infinite joys of reinforcement learning!
4. Simulating in MuJoCo
If you prefer to validate your trained policies in the MuJoCo simulator, here are the specific steps for your reference!
4.1 Using MuJoCo
To simulate in MuJoCo, simply execute the command:
python deploy/deploy_mujoco/deploy_mujoco.py {config_name}
Here, {config_name} refers to the name of a configuration file located in the deploy/deploy_mujoco/configs/ directory. For instance:
python deploy/deploy_mujoco/deploy_mujoco.py g1.yaml
This loads the simulation configuration for the G1 robot!
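To see which robot configurations are available in your checkout, simply list the directory mentioned above:
ls deploy/deploy_mujoco/configs/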
5. Deploying on a Physical Robot
If you want to deploy the trained model onto a real physical robot, make sure you have unitree_sdk2py installed, as it's crucial for connecting to and communicating with the robot!
5.1 Deployment Steps
- Ensure your physical robot is powered on and connected to your computer via a serial or network connection.
- Use the interfaces provided by unitree_sdk2py to control the robot (a hedged example follows below).
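As a hedged sketch of what launching a deployment can look like: recent versions of the project ship a deployment script under deploy/deploy_real/, and assuming that layout, the invocation is roughly as follows, where {net_interface} is the network interface connected to the robot and {config_name} is a config file such as g1.yaml; the script path and both arguments are assumptions to verify against your own checkout:
# Hypothetical example; confirm the script path and arguments in your copy of the repo.
python deploy/deploy_real/deploy_real.py {net_interface} {config_name}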
By following these steps, you can enjoy training and real-robot applications with Unitree RL GYM! These features will help you easily dive into reinforcement learning, allowing for endless fun with the perfect blend of robotics and AI!