
# Anchor3DLane

This repo is the official PyTorch implementation of the paper:

**Anchor3DLane: Learning to Regress 3D Anchors for Monocular 3D Lane Detection**, accepted by CVPR 2023.

In this paper, we define 3D lane anchors in 3D space and propose a BEV-free method named Anchor3DLane that predicts 3D lanes directly from front-view (FV) representations. The 3D lane anchors are projected onto the FV features to extract anchor features, which carry both structural and contextual information for accurate predictions. We further extend Anchor3DLane to the multi-frame setting, incorporating temporal information for better performance.
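To make the projection step concrete, here is a minimal NumPy sketch (not the repo's code; `project_to_fv`, `K`, and all numeric values are illustrative assumptions) of projecting 3D anchor points into the front-view image with a pinhole camera model, which is where the per-point anchor features would be sampled:

```python
# Hypothetical sketch: project 3D anchor points into the front-view (FV)
# image plane with a pinhole camera model. Names and values are
# illustrative, not the repo's API.
import numpy as np

def project_to_fv(points_3d, K):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    uvw = points_3d @ K.T            # (N, 3) homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

# Toy intrinsics (fx, fy, cx, cy are assumptions for illustration).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# One 3D anchor: points sampled at fixed forward distances along a line.
depths = np.linspace(5.0, 50.0, 10)
anchor = np.stack([0.5 * np.ones_like(depths),   # x: lateral offset (m)
                   1.5 * np.ones_like(depths),   # y: height offset (m)
                   depths], axis=1)              # z: forward distance (m)

uv = project_to_fv(anchor, K)
# Farther points project closer to the principal point (cx, cy);
# FV features would be sampled (e.g. bilinearly) at these uv locations.
```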

## Update

- [2023/06/02] We have added the code to generate data lists in the data conversion tools.
- [2023/06/15] We have supported testing with multiple GPUs.
- [2024/04/02] We have released checkpoints of the single-frame Anchor3DLane on the latest version of the OpenLane dataset.

## Installation


Step 1. Create a conda virtual environment and activate it

```shell
conda create -n lane3d python=3.7 -y
conda activate lane3d
conda install pytorch==1.9.1 torchvision==0.10.1 cudatoolkit=11.1 -c pytorch -y
```

Step 2. Install dependencies

```shell
pip install -U openmim
mim install mmcv-full
pip install -r requirements.txt
```

Refer to ONCE-3dlane to install jarvis.

Step 3. Install Anchor3DLane

```shell
git clone https://github.com/tusen-ai/Anchor3DLane.git
cd Anchor3DLane
python setup.py develop
```

Compile multi-scale deformable attention:

```shell
cd mmseg/models/utils/ops
sh make.sh
```

This repo is implemented based on open-mmlab mmsegmentation-v0.26.0. Refer to here for more detailed installation instructions.

## Data Preparation

The data folders are organized as follows:

```
├── data/
│   ├── Apollosim/
│   │   ├── data_splits/
│   │   │   ├── standard/
│   │   │   │   ├── train.json
│   │   │   │   └── test.json
│   │   │   └── ...
│   │   ├── data_lists/...
│   │   ├── images/...
│   │   └── cache_dense/...        # processed lane annotations
│   ├── OpenLane/
│   │   ├── data_splits/...
│   │   ├── data_lists/...         # lists of training/testing data
│   │   ├── images/...
│   │   ├── lane3d_1000/...        # original lane annotations
│   │   ├── cache_dense/...
│   │   └── prev_data_release/...  # temporal poses
│   └── ONCE/
│       ├── raw_data/              # camera images
│       ├── annotations/           # original lane annotations
│       │   ├── train/...
│       │   └── val/...
│       ├── data_splits/...
│       ├── data_lists/...
│       └── cache_dense/
```
Note: You can generate the data_lists files by running our data conversion tools as described below. We also provide the data lists we used in the data/ folder of this repo.
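Conceptually, a data list is just a text file enumerating the samples of a split. The following sketch builds one by walking an image directory and writing one relative path per line (a minimal illustration with assumed layout and naming; the exact format expected by this repo is produced by the conversion tools below):

```python
# Hypothetical sketch of building a data list file: collect relative
# image paths under image_root and write one per line. Directory layout,
# extension, and output name are assumptions, not the repo's exact format.
import os

def write_data_list(image_root, out_path, ext=".jpg"):
    paths = []
    for dirpath, _, filenames in os.walk(image_root):
        for name in filenames:
            if name.endswith(ext):
                full = os.path.join(dirpath, name)
                paths.append(os.path.relpath(full, image_root))
    with open(out_path, "w") as f:
        f.write("\n".join(sorted(paths)))
    return len(paths)  # number of samples listed
```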

### ApolloSim

1. Download dataset from ApolloSim Dataset and organize the data folder as mentioned above.

2. Change the data path in apollosim.py and generate annotation pickle files by running:

```shell
cd tools/convert_datasets
python apollosim.py [apollo_root]
```

### OpenLane

1. Refer to OpenLane Dataset for data downloading and organize the data folder as mentioned above.

2. Merge annotations and generate pickle files by running:

```shell
cd tools/convert_dataset
python openlane.py [openlane_root] --merge
python openlane.py [openlane_root] --generate
```

3. (Optional) If you wish to run the multi-frame experiments, download the cross-frame pose data we processed from Baidu Drive.

We also provide the cross-frame pose extraction script at tools/convert_dataset/openlane_temporal.py for customized use. You can fetch the raw pose data from the link at Baidu Drive, or extract the raw pose data with the tools provided in save_pose().

### ONCE-3DLane

1. Refer to ONCE-3DLane Dataset for data downloading and organize the data folder as mentioned above.

2. Merge annotations and generate pickle files by running the following commands:

```shell
cd tools/convert_dataset
python once.py [once_root] --merge
python once.py [once_root] --generate
```

## Model Zoo

We provide pretrained weights of Anchor3DLane and Anchor3DLane+ (w/ iterative regression) on the ApolloSim-Standard and ONCE-3DLane datasets. For the OpenLane dataset, we additionally provide weights for Anchor3DLane-T+ (w/ multi-frame interaction).

### ApolloSim

| Model | F1 | AP | x error close (m) | x error far (m) | z error close (m) | z error far (m) | Baidu Drive Link |
|---|---|---|---|---|---|---|---|
| Anchor3DLane | 95.6 | 97.2 | 0.052 | 0.306 | 0.015 | 0.223 | download |
| Anchor3DLane+ | 97.1 | 95.4 | 0.045 | 0.300 | 0.016 | 0.223 | download |

### OpenLane

Results on OpenLane-v1.1.

| Model | Backbone | F1 | Cate Acc | x error close (m) | x error far (m) | z error close (m) | z error far (m) | Baidu Drive Link |
|---|---|---|---|---|---|---|---|---|
| Anchor3DLane | ResNet-18 | 53.1 | 90.0 | 0.300 | 0.311 | 0.103 | 0.139 | download |
| Anchor3DLane | EfficientNet-B3 | 56.0 | 89.0 | 0.293 | 0.317 | 0.103 | 0.130 | download |
| Anchor3DLane+ | ResNet-18 | 53.7 | 90.9 | 0.276 | 0.311 | 0.107 | 0.138 | download |
| Anchor3DLane-T+ | ResNet-18 | 54.3 | 90.7 | 0.275 | 0.310 | 0.105 | 0.135 | download |

Note: We used an earlier version of the OpenLane dataset in our paper, whose lane-point coordinates are significantly inconsistent with the latest version, as discussed in this issue. It is therefore expected that testing our checkpoints on the latest OpenLane validation set does not reproduce the numbers reported in the paper. You can still reproduce those results by training on the training set of the latest version.

Results on OpenLane-v1.2.

We also provide checkpoints of the single-frame settings on the latest OpenLane dataset for validation.

| Model | Backbone | F1 | Cate Acc | x error close (m) | x error far (m) | z error close (m) | z error far (m) | Baidu Drive Link |
|---|---|---|---|---|---|---|---|---|
| Anchor3DLane | ResNet-18 | 53.1 | 88.9 | 0.269 | 0.293 | 0.079 | 0.111 | download |
| Anchor3DLane+ | ResNet-18 | 53.6 | 89.8 | 0.272 | 0.292 | 0.085 | 0.116 | download |
| Anchor3DLane+ | ResNet-50 | 57.5 | 91.9 | 0.229 | 0.243 | 0.079 | 0.106 | download |

### ONCE-3DLane

| Model | Backbone | F1 | Precision | Recall | CD Error (m) | Baidu Drive Link |
|---|---|---|---|---|---|---|
| Anchor3DLane | ResNet-18 | 74.44 | 80.50 | 69.23 | 0.064 | download |
| Anchor3DLane | EfficientNet-B3 | 75.02 | 83.22 | 68.29 | 0.064 | download |
| Anchor3DLane+ | ResNet-18 | 74.87 | 80.85 | 69.71 | 0.060 | download |
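As a quick sanity check on the table above, the F1 column is the harmonic mean of precision and recall:

```python
# F1 as the harmonic mean of precision and recall, checked against the
# ONCE-3DLane rows above (values in percent).
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# e.g. Anchor3DLane (ResNet-18): precision 80.50, recall 69.23
f1 = f1_score(80.50, 69.23)  # rounds to the 74.44 reported above
```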

## Testing

Run the following commands to evaluate the given checkpoint:

```shell
export PYTHONPATH=$PYTHONPATH:./gen-efficientnet-pytorch
python tools/test.py [config] [checkpoint] --show-dir [output_dir] [--show]
```

You can append the optional --show flag to generate visualization results in output_dir/vis.

For multi-gpu testing, run the following command:

```shell
bash tools/dist_test.sh [config] [checkpoint] [num_gpu] --show-dir [output_dir] [--show]
```

or

```shell
bash tools/slurm_test.sh [PARTITION] [JOB_NAME] [config] [checkpoint] --show-dir [output_dir] [--show]
```

## Training

1. Download the pretrained weights from Baidu Drive and put them in ./pretrained/ directory.

2. Modify the `work_dir` in the [config] file to your desired output directory.
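In mmsegmentation-style configs, `work_dir` is a plain Python variable; logs and checkpoints are written under it. A minimal sketch (the path is an example, and the rest of the config file is elided):

```python
# In your [config].py (mmsegmentation-style config), point work_dir at
# the directory where training logs and checkpoints should be saved.
# The path below is an illustrative example, not a required location.
work_dir = './output/anchor3dlane_experiment'
```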

3. For single-gpu training, run the following commands:

```shell
export PYTHONPATH=$PYTHONPATH:./gen-efficientnet-pytorch
python tools/train.py [config]
```

4. For multi-gpu training, run the following commands:

```shell
bash tools/dist_train.sh [config] [num_gpu]
```

or

```shell
bash tools/slurm_train.sh [PARTITION] [JOB_NAME] [config]
```

## Visualization

We present visualization results of Anchor3DLane on the ApolloSim, OpenLane and ONCE-3DLane datasets.

- Visualization results on the ApolloSim dataset.

- Visualization results on the OpenLane dataset.

- Visualization results on the ONCE-3DLane dataset.

## Citation

If you find this repo useful for your research, please cite

```bibtex
@inproceedings{huang2023anchor3dlane,
  title = {Anchor3DLane: Learning to Regress 3D Anchors for Monocular 3D Lane Detection},
  author = {Huang, Shaofei and Shen, Zhenwei and Huang, Zehao and Ding, Zi-han and Dai, Jiao and Han, Jizhong and Wang, Naiyan and Liu, Si},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year = {2023}
}
```

## Contact

For questions about our paper or code, please contact Shaofei Huang (nowherespyfly@gmail.com).