Zehan Zheng, Fan Lu, Weiyi Xue, Guang Chen†, Changjun Jiang († Corresponding author)
CVPR 2024
Paper (arXiv) | Paper (CVPR) | Project Page | Video | Poster | Slides
This repository is the official PyTorch implementation for LiDAR4D.
2024-6-1: We release the simulator for easier rendering and manipulation. Happy Children's Day and have fun!
2024-5-4: We update the flow fields and improve temporal interpolation.
2024-4-13: We update the U-Net of LiDAR4D for better ray-drop refinement.
2024-4-5: Code of LiDAR4D is released.
2024-4-4: The preprint paper is available on arXiv, along with the project page.
2024-2-27: Our paper is accepted by CVPR 2024.
(Demo video: LiDAR4D_demo.mp4)
LiDAR4D is a differentiable LiDAR-only framework for novel space-time LiDAR view synthesis: it reconstructs dynamic driving scenarios and generates realistic LiDAR point clouds end-to-end. It adopts 4D hybrid neural representations together with motion priors derived from point clouds for geometry-aware and time-consistent large-scale scene reconstruction.
```bash
git clone https://github.com/ispc-lab/LiDAR4D.git
cd LiDAR4D

conda create -n lidar4d python=3.9
conda activate lidar4d

# PyTorch
# CUDA 12.1
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
# CUDA 11.8
# pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
# CUDA <= 11.7
# pip install torch==2.0.0 torchvision torchaudio

# Dependencies
pip install -r requirements.txt

# Local compile for tiny-cuda-nn
git clone --recursive https://github.com/nvlabs/tiny-cuda-nn
cd tiny-cuda-nn/bindings/torch
python setup.py install

# Compile packages in utils (run from the LiDAR4D repo root)
cd utils/chamfer3D
python setup.py install
```
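Before moving on, a quick sanity check can confirm that the CUDA build of PyTorch and the tiny-cuda-nn bindings both work. This is a minimal sketch of ours, not part of the official scripts:

```python
# sanity_check.py -- hypothetical helper, not part of the repo
import torch

# Confirm PyTorch was installed with a CUDA build and a GPU is visible.
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

# Confirm the tiny-cuda-nn bindings compiled and import cleanly.
import tinycudann as tcnn

# Instantiate a small hash-grid encoding on the GPU as a smoke test.
encoding = tcnn.Encoding(
    n_input_dims=3,
    encoding_config={
        "otype": "HashGrid",
        "n_levels": 8,
        "n_features_per_level": 2,
        "log2_hashmap_size": 15,
        "base_resolution": 16,
        "per_level_scale": 1.5,
    },
)
x = torch.rand(1024, 3, device="cuda")
print("encoded features:", encoding(x).shape)  # -> (1024, 16)
```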
KITTI-360 dataset (Download)

We use sequence00 (`2013_05_28_drive_0000_sync`) for the experiments in our paper. Download the KITTI-360 dataset (2D images are not needed) and put it into `data/kitti360` (or use a symlink: `ln -s DATA_ROOT/KITTI-360 ./data/kitti360/`).
The folder tree is as follows:
```
data
└── kitti360
    └── KITTI-360
        ├── calibration
        ├── data_3d_raw
        └── data_poses
```
Next, run the KITTI-360 dataset preprocessing (set `DATASET` and `SEQ_ID`):

```bash
bash preprocess_data.sh
```
After preprocessing, your folder structure should look like this:
```
configs
├── kitti360_{sequence_id}.txt
data
└── kitti360
    ├── KITTI-360
    │   ├── calibration
    │   ├── data_3d_raw
    │   └── data_poses
    └── train
        ├── transforms_{sequence_id}test.json
        ├── transforms_{sequence_id}train.json
        └── transforms_{sequence_id}val.json
```
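To double-check the preprocessing output, you can inspect the generated `transforms_*.json` files directly. The snippet below is a minimal sketch that only assumes the files are JSON; the sequence id is a placeholder, and the exact keys may differ:

```python
import json
from pathlib import Path

# Hypothetical sequence id; replace with the one you preprocessed.
seq_id = "4950"
path = Path("data/kitti360/train") / f"transforms_{seq_id}train.json"

with open(path) as f:
    meta = json.load(f)

# Print the top-level keys and frame count without assuming a fixed schema.
print("top-level keys:", sorted(meta.keys()))
if "frames" in meta:
    print("num frames:", len(meta["frames"]))
    print("first frame keys:", sorted(meta["frames"][0].keys()))
```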
Set the corresponding sequence config path in `--config`, and modify the logging file path in `--workspace` if needed. Remember to set an available GPU ID in `CUDA_VISIBLE_DEVICES`.
Run the following command:
```bash
# KITTI-360
bash run_kitti_lidar4d.sh
```
KITTI-360 Dynamic Dataset (Sequences: 2350, 4950, 8120, 10200, 10750, 11400)
| Method | Point Cloud CD↓ | Point Cloud F-Score↑ | Depth RMSE↓ | Depth MedAE↓ | Depth LPIPS↓ | Depth SSIM↑ | Depth PSNR↑ | Intensity RMSE↓ | Intensity MedAE↓ | Intensity LPIPS↓ | Intensity SSIM↑ | Intensity PSNR↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LiDAR-NeRF | 0.1438 | 0.9091 | 4.1753 | 0.0566 | 0.2797 | 0.6568 | 25.9878 | 0.1404 | 0.0443 | 0.3135 | 0.3831 | 17.1549 |
| LiDAR4D (Ours) † | 0.1002 | 0.9320 | 3.0589 | 0.0280 | 0.0689 | 0.8770 | 28.7477 | 0.0995 | 0.0262 | 0.1498 | 0.6561 | 20.0884 |
KITTI-360 Static Dataset (Sequences: 1538, 1728, 1908, 3353)
| Method | Point Cloud CD↓ | Point Cloud F-Score↑ | Depth RMSE↓ | Depth MedAE↓ | Depth LPIPS↓ | Depth SSIM↑ | Depth PSNR↑ | Intensity RMSE↓ | Intensity MedAE↓ | Intensity LPIPS↓ | Intensity SSIM↑ | Intensity PSNR↑ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LiDAR-NeRF | 0.0923 | 0.9226 | 3.6801 | 0.0667 | 0.3523 | 0.6043 | 26.7663 | 0.1557 | 0.0549 | 0.4212 | 0.2768 | 16.1683 |
| LiDAR4D (Ours) † | 0.0834 | 0.9312 | 2.7413 | 0.0367 | 0.0995 | 0.8484 | 29.3359 | 0.1116 | 0.0335 | 0.1799 | 0.6120 | 19.0619 |
†: Latest results, which improve on those reported in the paper.
Experiments were conducted on an NVIDIA RTX 4090 GPU. Results may vary slightly due to randomness.
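For reference, the point-cloud metrics above (CD and F-Score) are commonly computed as sketched below. This is illustrative, not the repository's exact evaluation code; the distance threshold and averaging conventions are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(pred: np.ndarray, gt: np.ndarray, threshold: float = 0.05):
    """pred: (N, 3), gt: (M, 3) point clouds in meters."""
    # Nearest-neighbor distance from each point to the other cloud.
    d_pred2gt, _ = cKDTree(gt).query(pred)   # for each predicted point
    d_gt2pred, _ = cKDTree(pred).query(gt)   # for each ground-truth point

    # Symmetric Chamfer Distance: sum of mean distances in both directions.
    cd = d_pred2gt.mean() + d_gt2pred.mean()

    # F-Score: harmonic mean of precision/recall at a distance threshold.
    precision = (d_pred2gt < threshold).mean()
    recall = (d_gt2pred < threshold).mean()
    fscore = 2 * precision * recall / max(precision + recall, 1e-8)
    return cd, fscore

# Toy usage with random clouds.
pred = np.random.rand(1000, 3)
gt = np.random.rand(1200, 3)
print(chamfer_and_fscore(pred, gt))
```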
After reconstruction, you can use the simulator to render and manipulate LiDAR point clouds in the whole scenario. It supports dynamic scene re-play, novel LiDAR configurations (`--fov_lidar`, `--H_lidar`, `--W_lidar`), and novel trajectories (`--shift_x`, `--shift_y`, `--shift_z`).

We also provide a simple demo setting that transforms the LiDAR configuration from KITTI-360 to NuScenes, using `--kitti2nus` in the bash script.
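To illustrate what a novel LiDAR configuration means here, the sketch below builds ray directions for an `H_lidar` × `W_lidar` spherical range image from a vertical field of view, then lifts a range image back to a point cloud. It is a hypothetical helper under common range-image conventions; the repo's `--fov_lidar` flag and the example resolution may be parameterized differently:

```python
import numpy as np

def lidar_ray_directions(H, W, fov_up_deg, fov_down_deg):
    """Unit ray directions for an H x W spherical range image."""
    # One elevation per row (top row = fov_up), one azimuth per column.
    elev = np.deg2rad(np.linspace(fov_up_deg, fov_down_deg, H))
    azim = np.deg2rad(np.linspace(180.0, -180.0, W, endpoint=False))
    elev, azim = np.meshgrid(elev, azim, indexing="ij")

    # Spherical -> Cartesian (x forward, y left, z up).
    dirs = np.stack([
        np.cos(elev) * np.cos(azim),
        np.cos(elev) * np.sin(azim),
        np.sin(elev),
    ], axis=-1)
    return dirs  # (H, W, 3)

# Example: lift a dummy range image to a point cloud.
H, W = 66, 1030  # assumed range-image resolution for illustration
rays = lidar_ray_directions(H, W, fov_up_deg=2.0, fov_down_deg=-24.9)
ranges = np.full((H, W), 10.0)              # dummy 10 m returns
points = (rays * ranges[..., None]).reshape(-1, 3)
print(points.shape)  # -> (67980, 3)
```

Changing `H`, `W`, or the field-of-view bounds in this parameterization corresponds to re-rendering the scene with a different sensor, which is what the simulator flags expose.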
Check the sequence config along with the corresponding workspace and model path (`--ckpt`).
Run the following command:
```bash
bash run_kitti_lidar4d_sim.sh
```
The results will be saved in the workspace folder.
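To view the rendered point clouds in an external tool such as MeshLab or CloudCompare, you can convert them to PLY. The file path and `.npy` format below are assumptions for illustration; check your workspace folder for the actual output layout:

```python
import numpy as np

# Hypothetical output file; check the workspace folder for real names.
points = np.load("log/kitti360_lidar4d_sim/points/frame_0000.npy")

# Write an ASCII PLY with one vertex per LiDAR return.
with open("frame_0000.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(points)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("end_header\n")
    for x, y, z in points[:, :3]:
        f.write(f"{x} {y} {z}\n")
```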
We sincerely appreciate the great contributions of the following works:
If you find our repo or paper helpful, feel free to support us with a star ⭐ or use the following citation:
```bibtex
@inproceedings{zheng2024lidar4d,
  title     = {LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis},
  author    = {Zheng, Zehan and Lu, Fan and Xue, Weiyi and Chen, Guang and Jiang, Changjun},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024}
}
```
All code within this repository is under Apache License 2.0.