Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling
- Penghao Wu, Li Chen, Hongyang Li, Xiaosong Jia, Junchi Yan, Yu Qiao
- arXiv Paper | OpenReview, ICLR 2023
- video | blog
This repository contains the PyTorch implementation of PPGeo, introduced in the paper Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling. PPGeo is a fully self-supervised driving-policy pre-training framework that learns from unlabeled driving videos. Pre-trained checkpoints are listed below; a loading sketch follows the table.
Model | Google Drive Link | BaiduYun Link |
---|---|---|
Visual Encoder (ResNet-34) | ckpt | ckpt (code: itqi) |
DepthNet | ckpt | ckpt (code: xvof) |
PoseNet | ckpt | ckpt (code: fp2n) |
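If you only need the pre-trained representation, the visual encoder checkpoint can be loaded into a standard torchvision ResNet-34. A minimal loading sketch, assuming the file stores a plain state dict; the filename and key layout here are placeholders, so inspect the released checkpoint and adjust the prefix handling accordingly:

```python
import torch
import torchvision

# ResNet-34 backbone, matching the visual encoder listed in the table above.
encoder = torchvision.models.resnet34(weights=None)

# Assumption: the checkpoint is (or contains under "state_dict") a plain
# state dict, possibly with an "encoder." prefix; adapt to the real layout.
ckpt = torch.load("ppgeo_resnet34.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)
state_dict = {k.replace("encoder.", "", 1): v for k, v in state_dict.items()}

# strict=False tolerates heads (e.g. the fc layer) not stored in the checkpoint.
missing, unexpected = encoder.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)

encoder.eval()
```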
- Clone the repo and create the conda environment.
git clone https://github.com/OpenDriveLab/PPGeo.git
cd PPGeo
conda env create -f environment.yml --name PPGeo
conda activate PPGeo
- Download the driving video dataset following the instructions in ACO.
- Make a symlink to the dataset root.
ln -s DATA_ROOT data
- Preprocess the data.
python ytb_data_preprocess.py
- First stage training (see the photometric-loss sketch after this step).
python train.py --id ppgeo_stage1_log --stage 1 --epochs 30
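The first stage follows the monodepth2-style self-supervised recipe this code builds on: a neighbouring frame is warped into the current view using the predicted depth and ego-motion, and the reconstruction is scored with a combined SSIM + L1 photometric error. A minimal sketch of that error term (weighting, masking, and multi-scale details in the actual loss may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def ssim_dissimilarity(x, y):
    """Simplified SSIM over 3x3 windows (monodepth2 style); returns a per-pixel
    dissimilarity in [0, 1] for images x, y of shape (B, C, H, W)."""
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    x, y = nn.ReflectionPad2d(1)(x), nn.ReflectionPad2d(1)(y)
    mu_x, mu_y = F.avg_pool2d(x, 3, 1), F.avg_pool2d(y, 3, 1)
    sigma_x = F.avg_pool2d(x * x, 3, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, 3, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, 3, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return torch.clamp((1 - num / den) / 2, 0, 1)


def photometric_error(warped, target, alpha=0.85):
    """Reprojection error between a source frame warped into the target view
    (via predicted depth, ego-motion, and intrinsics) and the target frame."""
    l1 = torch.abs(warped - target).mean(1, keepdim=True)
    ssim = ssim_dissimilarity(warped, target).mean(1, keepdim=True)
    return alpha * ssim + (1 - alpha) * l1
```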
- Second stage training (see the single-frame ego-motion sketch after this step).
python train.py --id ppgeo_stage2_log --stage 2 --epochs 20 --ckpt PATH_TO_STAGE1_CKPT
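In the second stage, the visual encoder itself learns to predict ego-motion from a single frame, supervised through the networks trained in the first stage. A minimal sketch of what such a single-frame motion model could look like; the head design, feature dimension, and output scaling are assumptions, not the exact model in train.py:

```python
import torch
import torch.nn as nn
import torchvision


class SingleFrameMotionNet(nn.Module):
    """ResNet-34 visual encoder plus a small head predicting 6-DoF ego-motion
    (axis-angle rotation + translation) from one image. Illustrative only."""

    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet34(weights=None)
        resnet.fc = nn.Identity()      # expose pooled 512-d features
        self.encoder = resnet
        self.head = nn.Sequential(
            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 6),
        )

    def forward(self, image):
        # Small output scale at initialisation, as in monodepth2's pose decoder.
        return 0.01 * self.head(self.encoder(image))


motion = SingleFrameMotionNet()(torch.randn(2, 3, 256, 256))  # shape (2, 6)
```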
- Download the nuScenes dataset first.
- Make a symlink to the nuScenes dataset root.
cd nuscenes_planning
cd data
ln -s nuScenes_data_root nuscenes
cd ..
- Train the planning model (see the planning-head sketch after this step).
python train_planning.py --pretrained_ckpt PATH_TO_STAGE2_CKPT
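For downstream planning, the pre-trained encoder acts as the perception backbone. A minimal sketch of how its pooled features could feed a simple waypoint-regression head; the real model in train_planning.py may differ, and the head structure and waypoint count below are assumptions:

```python
import torch
import torch.nn as nn
import torchvision


class SimplePlanner(nn.Module):
    """Pre-trained ResNet-34 features -> MLP regressing future (x, y) waypoints.
    Illustrative only; not the architecture used in train_planning.py."""

    def __init__(self, encoder, num_waypoints=4):
        super().__init__()
        encoder.fc = nn.Identity()      # pooled 512-d features
        self.encoder = encoder          # e.g. the PPGeo-initialised ResNet-34
        self.num_waypoints = num_waypoints
        self.head = nn.Sequential(
            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, num_waypoints * 2),
        )

    def forward(self, image):
        feat = self.encoder(image)                        # (B, 512)
        return self.head(feat).view(-1, self.num_waypoints, 2)


planner = SimplePlanner(torchvision.models.resnet34(weights=None))
waypoints = planner(torch.randn(2, 3, 256, 256))          # shape (2, 4, 2)
```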
We use the DI-drive engine for IL data collection, IL training, IL evaluation, and PPO training, following ACO, with CARLA version 0.9.9.4. Some additional details can be found here.
We use the TCP codebase for training and evaluation with the default settings.
If you find our repository or paper useful, please cite:
@inproceedings{wu2023PPGeo,
title={Policy Pre-training for Autonomous Driving via Self-supervised Geometric Modeling},
author={Penghao Wu and Li Chen and Hongyang Li and Xiaosong Jia and Junchi Yan and Yu Qiao},
booktitle={International Conference on Learning Representations},
year={2023}
}
All code within this repository is under Apache License 2.0.
Our code is based on monodepth2.