This repository provides the environment used to train EngineAI robots (and other robots) to walk on rough terrain using NVIDIA's Isaac Gym. It includes all components needed for sim-to-real transfer: an actuator network, friction and mass randomization, noisy observations, and random pushes during training.
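The friction/mass randomization, observation noise and random pushes are configured in the environment config. The sketch below shows the kind of settings involved, following the upstream legged_gym field names; the exact names and values used by this repository's configs may differ:

```python
# Illustrative only: legged_gym-style randomization/noise settings in an env config.
class domain_rand:
    randomize_friction = True
    friction_range = [0.5, 1.25]   # friction randomization
    randomize_base_mass = True
    added_mass_range = [-1., 1.]   # mass randomization (kg added to the base)
    push_robots = True
    push_interval_s = 15           # random pushes during training
    max_push_vel_xy = 1.

class noise:
    add_noise = True               # noisy observations
    noise_level = 1.0
    class noise_scales:
        dof_pos = 0.01
        dof_vel = 1.5
        ang_vel = 0.2
        gravity = 0.05
```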
2024.09.10 This is the full version of our repository, which offers the basic RL workflow: training, play (generates a .pt model), sim2sim, and sim2real (generates a .onnx model for real deployment). This repository will be maintained and updated continuously.
Maintainer: EngineAI
Affiliation: EngineAI Robot, China (https://www.engineai.com.cn/)
Contact: info@engineai.com.cn
Project website: https://github.com/engineai-robotics/engineai_legged_gym
- Create a new Python virtual environment with Python 3.8
- Install PyTorch 1.13.1 with CUDA 11.7:
  `pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117`
- Install Isaac Gym
  - Download and install Isaac Gym Preview 4 from https://developer.nvidia.com/isaac-gym
  - `cd isaacgym/python && pip install -e .`
  - Try running an example: `cd examples && python 1080_balls_of_solitude.py`
  - For troubleshooting, check the docs: `isaacgym/docs/index.html`
- Install rsl_rl (PPO implementation)
  - Clone this repository
  - `cd rsl_rl && pip install -e .`
- Install engineai_legged_gym
  - `cd engineai_legged_gym && pip install -e .`
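A quick way to verify the installation (a minimal sketch, not part of the repository; note that `isaacgym` must be imported before `torch`):

```python
# Minimal install check: isaacgym must be imported before torch.
from isaacgym import gymapi  # raises ImportError if Isaac Gym is not installed correctly
import torch

gym = gymapi.acquire_gym()  # acquire the simulation interface
print("Isaac Gym OK, CUDA available:", torch.cuda.is_available())
```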
- Each environment is defined by an env file (`legged_robot.py`) and a config file (`legged_robot_config.py`). The config file contains two classes: one with all the environment parameters (`LeggedRobotCfg`) and one with the training parameters (`LeggedRobotCfgPPO`).
- Both env and config classes use inheritance.
- Each non-zero reward scale specified in `cfg` adds a function with the corresponding name to the list of elements that are summed to obtain the total reward (see the sketch after this list).
- Tasks must be registered using `task_registry.register(name, EnvClass, EnvConfig, TrainConfig)`. This is done in `envs/__init__.py`, but can also be done from outside of this repository.
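As a concrete illustration of the reward-scale convention, the sketch below follows the upstream legged_gym pattern; the class names (`MyCfg`, `MyEnv`) and scale values are hypothetical:

```python
# Illustrative only: class names and scale values are placeholders.
import torch
from legged_gym.envs.base.legged_robot import LeggedRobot
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg

# In the config: each non-zero scale maps to a method _reward_<name>() of the env.
class MyCfg(LeggedRobotCfg):
    class rewards(LeggedRobotCfg.rewards):
        class scales(LeggedRobotCfg.rewards.scales):
            tracking_lin_vel = 1.0   # summed as 1.0 * self._reward_tracking_lin_vel()
            torques = -1e-5          # negative scales act as penalties
            feet_air_time = 0.0      # zero scale: this term is skipped entirely

# In the env: the method that the non-zero "torques" scale refers to.
class MyEnv(LeggedRobot):
    def _reward_torques(self):
        # penalize actuator torques; returns one value per environment
        return torch.sum(torch.square(self.torques), dim=1)
```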
- Train:
  `python legged_gym/scripts/train.py --task=zqsa01`
  - To run on CPU, add the following arguments: `--sim_device=cpu`, `--rl_device=cpu` (sim on CPU and rl on GPU is possible).
  - To run headless (no rendering), add `--headless`.
  - Important: to improve performance, once the training starts press `v` to stop the rendering. You can then enable it later to check the progress.
  - The trained policy is saved in `engineai_legged_gym/logs/<experiment_name>/<date_time>_<run_name>/model_<iteration>.pt`, where `<experiment_name>` and `<run_name>` are defined in the train config.
  - The following command line arguments override the values set in the config files:
    - `--task TASK`: Task name.
    - `--resume`: Resume training from a checkpoint.
    - `--experiment_name EXPERIMENT_NAME`: Name of the experiment to run or load.
    - `--run_name RUN_NAME`: Name of the run.
    - `--load_run LOAD_RUN`: Name of the run to load when resume=True. If -1: will load the last run.
    - `--checkpoint CHECKPOINT`: Saved model checkpoint number. If -1: will load the last checkpoint.
    - `--num_envs NUM_ENVS`: Number of environments to create.
    - `--seed SEED`: Random seed.
    - `--max_iterations MAX_ITERATIONS`: Maximum number of training iterations.
- Play a trained policy:
  `python legged_gym/scripts/play.py --task=zqsa01`
  - By default, the loaded policy is the last model of the last run of the experiment folder.
  - Other runs/model iterations can be selected by setting `load_run` and `checkpoint` in the train config.
- sim2sim:
  `python legged_gym/scripts/sim2sim_zqsa01.py --load_model logs/zqsa01_ppo/0_exported/policies/policy_1.pt`
- sim2real:
  `python sim2real_deploy/export_onnx_policy.py`
  - A file named zqsa01_policy.onnx is generated; it can be used to replace the one in the real robot deployment. Check the deployment process in the README.md of our repository engineai_humanoid.
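One way to sanity-check the exported zqsa01_policy.onnx on the PC before deployment is to load it with onnxruntime and run a dummy observation through it (a hedged sketch; onnxruntime is not a dependency of this repository, and the observation size is read from the model itself):

```python
# Illustrative check of the exported policy with onnxruntime (not part of this repo).
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("zqsa01_policy.onnx")
inp = sess.get_inputs()[0]
# Replace any dynamic/batch dimensions with 1 to build a dummy observation.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
obs = np.zeros(shape, dtype=np.float32)

actions = sess.run(None, {inp.name: obs})[0]
print("input shape:", shape, "-> action shape:", actions.shape)
```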
The base environment `legged_robot` implements a rough-terrain locomotion task. The corresponding cfg does not specify a robot asset (URDF/MJCF) and has no reward scales.
- Add a new folder to `envs/` with `<your_env>_config.py`, which inherits from an existing environment cfg (see the sketch below).
- If adding a new robot:
  - Add the corresponding assets to `resources/`.
  - In `cfg` set the asset path, define body names, default_joint_positions and PD gains. Specify the desired `train_cfg` and the name of the environment (python class).
  - In `train_cfg` set `experiment_name` and `run_name`.
- (If needed) implement your environment in `<your_env>.py`, inherit from an existing environment, overwrite the desired functions and/or add your reward functions.
- Register your env in `legged_gym/envs/__init__.py`.
- Modify/tune other parameters in your `cfg`, `cfg_train` as needed. To remove a reward, set its scale to zero. Do not modify parameters of other envs!
If you are a beginner in RL, please refer to the detailed process of adding a new environment in the legged_gym repository.
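A minimal sketch of the steps above for a hypothetical robot called `myrobot` (class names, paths and gains are placeholders; module paths follow the upstream legged_gym layout and may need to be adapted to this repository):

```python
# legged_gym/envs/myrobot/myrobot_config.py -- hypothetical example
from legged_gym.envs.base.legged_robot_config import LeggedRobotCfg, LeggedRobotCfgPPO

class MyRobotCfg(LeggedRobotCfg):
    class asset(LeggedRobotCfg.asset):
        file = '{LEGGED_GYM_ROOT_DIR}/resources/robots/myrobot/urdf/myrobot.urdf'
        foot_name = 'foot'                      # substring of the foot body names
    class init_state(LeggedRobotCfg.init_state):
        default_joint_angles = {'left_hip_joint': 0.0, 'right_hip_joint': 0.0}
    class control(LeggedRobotCfg.control):
        stiffness = {'hip_joint': 40.0}         # PD gain (P), per joint-name substring
        damping = {'hip_joint': 1.0}            # PD gain (D)

class MyRobotCfgPPO(LeggedRobotCfgPPO):
    class runner(LeggedRobotCfgPPO.runner):
        experiment_name = 'myrobot_ppo'
        run_name = ''

# legged_gym/envs/__init__.py -- register the new task
from legged_gym.utils.task_registry import task_registry
from legged_gym.envs.base.legged_robot import LeggedRobot

task_registry.register('myrobot', LeggedRobot, MyRobotCfg(), MyRobotCfgPPO())
```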
- If you get the following error: `ImportError: libpython3.8m.so.1.0: cannot open shared object file: No such file or directory`, run `sudo apt install libpython3.8`. It is also possible that you need to set `export LD_LIBRARY_PATH=/path/to/libpython/directory` or, for conda users, `export LD_LIBRARY_PATH=/path/to/conda/envs/your_env/lib` (replace /path/to/ with the corresponding path).
- The contact forces reported by `net_contact_force_tensor` are unreliable when simulating on GPU with a triangle mesh terrain. A workaround is to use force sensors, but the forces are propagated through the sensors of consecutive bodies, resulting in undesirable behaviour. However, for a legged robot it is possible to add sensors to the feet/end effector only and get the expected results. When using the force sensors, make sure to exclude gravity from the reported forces with `sensor_options.enable_forward_dynamics_forces`. Example:
```python
sensor_pose = gymapi.Transform()
for name in feet_names:
    sensor_options = gymapi.ForceSensorProperties()
    sensor_options.enable_forward_dynamics_forces = False  # for example gravity
    sensor_options.enable_constraint_solver_forces = True  # for example contacts
    sensor_options.use_world_frame = True  # report forces in world frame (easier to get vertical components)
    index = self.gym.find_asset_rigid_body_index(robot_asset, name)
    self.gym.create_asset_force_sensor(robot_asset, index, sensor_pose, sensor_options)
(...)

sensor_tensor = self.gym.acquire_force_sensor_tensor(self.sim)
self.gym.refresh_force_sensor_tensor(self.sim)
force_sensor_readings = gymtorch.wrap_tensor(sensor_tensor)
self.sensor_forces = force_sensor_readings.view(self.num_envs, 4, 6)[..., :3]
(...)

self.gym.refresh_force_sensor_tensor(self.sim)
contact = self.sensor_forces[:, :, 2] > 1.
```