
Isaac Lab Quadruped Tasks Extension


Overview

This repository contains an extension with tasks for training quadruped robots using Reinforcement Learning in Isaac Lab.

So far, three types of tasks (flat, rough, and stairs terrain) are available for training three different quadruped robots:

| Robot | Flat terrain | Rough terrain | Stairs terrain |
| --- | --- | --- | --- |
| ANYmal D (ANYbotics) | Isaac-Quadruped-AnymalD-Blind-Flat-v0 | Isaac-Quadruped-AnymalD-Blind-Rough-v0 | Isaac-Quadruped-AnymalD-Blind-Stairs-v0 |
| Go2 (Unitree) | Isaac-Quadruped-Go2-Blind-Flat-v0 | Isaac-Quadruped-Go2-Blind-Rough-v0 | Isaac-Quadruped-Go2-Blind-Stairs-v0 |
| Spot (Boston Dynamics) | Isaac-Quadruped-Spot-Blind-Flat-v0 | Isaac-Quadruped-Spot-Blind-Rough-v0 | Isaac-Quadruped-Spot-Blind-Stairs-v0 |

Installation

  1. Begin by installing NVIDIA's Isaac Sim and Isaac Lab.
  2. This repository includes an Isaac Lab extension with the quadruped tasks. To install it, follow these steps:
$ git clone git@github.com:felipemohr/IsaacLab-Quadruped-Tasks.git
$ cd IsaacLab-Quadruped-Tasks
$ conda activate isaaclab
$ python -m pip install -e exts/omni.isaac.lab_quadruped_tasks
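
To verify the installation, you can list the tasks the extension registers. The snippet below is a minimal sketch assuming the extension follows the usual Isaac Lab pattern of registering its environments with gymnasium on import; run it with the isaaclab environment active:

import gymnasium as gym
import omni.isaac.lab_quadruped_tasks  # noqa: F401 -- importing registers the Isaac-Quadruped-* tasks

# Print every quadruped task exposed by the extension
for task_id in sorted(gym.registry):
    if task_id.startswith("Isaac-Quadruped"):
        print(task_id)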

Training the Quadruped Agent

Use the scripts/rsl_rl/train.py script to train the robot, specifying the task:

$ python scripts/rsl_rl/train.py --task Isaac-Quadruped-Go2-Blind-Flat-v0 --headless

The available tasks are listed in the Overview table. The --headless flag disables the viewport, which speeds up training significantly.

The following arguments are optional and can be used to adjust the training configuration:

  • --num_envs - Number of environments to simulate (default is 1024)
  • --max_iterations - Maximum number of iterations to train (default is 8000)
  • --save_interval - The number of iterations between saves (default is 500)
  • --seed - Seed used for the environment (default is 42)

To record video clips during training, include the --enable_cameras and --video flags, along with the following arguments:

  • --video_length - Length of each recorded video, in steps (default is 400)
  • --video_interval - Interval between each video recording, in steps (default is 24000)

Putting it all together, the full command would look something like this:

$ python scripts/rsl_rl/train.py --task Isaac-Quadruped-Go2-Blind-Flat-v0 --num_envs 1024 --max_iterations 8000 --save_interval 500 --seed 42 --headless --enable_cameras --video --video_length 400 --video_interval 24000

To resume training from a checkpoint, set --resume to True and specify the run directory and checkpoint:

  • --resume - Whether to resume the training (default is False)
  • --load_run - The run directory to load (default is ".*", which loads the latest matching run in alphabetical order)
  • --load_checkpoint - The checkpoint file to load (default is "model_.*.pt", which loads the latest matching file in alphabetical order)

Alternatively, you can directly set the relative path to the checkpoint file with the --checkpoint_path argument:

$ python scripts/rsl_rl/train.py --task Isaac-Quadruped-Go2-Blind-Stairs-v0 --num_envs 1024 --max_iterations 4000 --resume True --checkpoint_path models/go2_blind_rough/model_8k.pt

Training logs will be generated in the directory where the training script was executed. Visualize these logs using TensorBoard:

$ python -m tensorboard.main --logdir=<path_to_your_logs_dir>

Playing the Trained Agent

Use the scripts/rsl_rl/play.py script to run the trained agent, specifying the task and the model path:

$ python scripts/rsl_rl/play.py --task Isaac-Quadruped-Go2-Blind-Flat-Play-v0 --num_envs 64 --checkpoint_path logs/rsl_rl/go2_blind_flat/XXXX-XX-XX_XX-XX-XX/model_XXXX.pt

The --num_envs argument is optional and sets the number of environments to simulate (default is 64).

Note that the task name ends with -Play-v0 instead of just -v0. This task is identical to the one used for training, but excludes the randomization terms that were used to make the agent more robust.
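
For reference, -Play-v0 variants in Isaac Lab are typically defined by subclassing the training configuration and switching off the randomization terms. The sketch below is illustrative only; the class and attribute names (QuadrupedFlatEnvCfg, push_robot, etc.) are assumptions and may not match this repository's actual configs:

from omni.isaac.lab.utils import configclass
from .flat_env_cfg import QuadrupedFlatEnvCfg  # hypothetical training config


@configclass
class QuadrupedFlatEnvCfg_PLAY(QuadrupedFlatEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        self.scene.num_envs = 64                            # fewer environments for visualization
        self.observations.policy.enable_corruption = False  # disable observation noise
        self.events.base_external_force_torque = None       # remove random external pushes
        self.events.push_robot = None                       # remove base velocity perturbations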

You can also use the pre-trained models in the models folder:

$ python scripts/rsl_rl/play.py --task Isaac-Quadruped-Go2-Blind-Rough-Play-v0 --checkpoint_path models/go2_blind_rough/model_8k.pt

Results

Below are some videos recorded during the training process for each of the tasks.

Blind locomotion, flat terrain:

rl-video-step-440000.mp4

Blind locomotion, rough terrain:

rl-video-step-440000.mp4
