4D-LATTE: 4D Radar Drivable Area Automatic Labelling Tools using Deep Learning

4D-LATTE is the first automatic labelling tool for the drivable area in 4D Radar data, using deep-learning-based Segment Anything (SA) drivable-area segmentation on camera images for autonomous vehicles. KAIST-Radar (K-Radar), provided by AVELab, is a novel large-scale object detection dataset and benchmark that contains 35K frames of 4D Radar tensor (4DRT) data with power measurements along the Doppler, range, azimuth, and elevation dimensions, together with carefully annotated 3D bounding box labels of objects on the roads. K-Radar includes challenging driving conditions such as adverse weather (fog, rain, and snow) on various road structures (urban roads, suburban roads, alleyways, and highways). In addition to the 4DRT, we provide auxiliary measurements from carefully calibrated high-resolution LiDARs, surround stereo cameras, and RTK-GPS. This repository provides the K-Radar dataset, an annotation tool for 3D bounding boxes, and a visualization tool for showing inference results and calibrating the sensors.

The URLs listed below are useful for using the K-Radar dataset and benchmark:

License and Commercialization Inquiries

The K-Radar dataset is published under the CC BY-NC-ND License, and all code is published under the Apache License 2.0.

The technologies in this repository have been developed by AVELab and are being commercialized by Zeta Mobility. For commercialization inquiries, please contact Zeta Mobility (e-mail: zeta@zetamobility.com).

How to Use the Automatic Labelling Tools using Deep Learning

The process of automatically labelling the drivable area is as follows.

  1. Clone this repository to obtain all the code and resources. On Linux, you can clone it as below.
git clone https://github.com/christofel04/4D-Radar-Drivable-Area-Automatic-Labelling-using-Advanced-Deep-Learning.git
  2. Go to the 4D Radar Automatic Labelling Tools directory and install all required Python packages with pip as below.
sudo pip install -r requirements.txt
  3. Install the Segment-Anything deep-learning image segmentation model by following the Segment Anything repository, then download its pretrained large model (ViT-L) checkpoint, as sketched below.
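
Below is a minimal sketch of how the downloaded ViT-L checkpoint can be loaded with the standard segment-anything API; the checkpoint filename follows the official Segment Anything release, and the use of CUDA is an assumption, not part of this repository's code.

# Minimal sketch: load the pretrained Segment-Anything ViT-L model.
# Assumes the official checkpoint file "sam_vit_l_0b3195.pth" has been downloaded.
import torch
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"
sam = sam_model_registry["vit_l"](checkpoint="sam_vit_l_0b3195.pth")
sam.to(device=device)
predictor = SamPredictor(sam)  # reused for every camera image to be labelled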

  4. Download the camera images, LiDAR point clouds (pcd), and meta files of one K-Radar dataset scene into one K-Radar dataset folder. An example K-Radar dataset folder layout is shown below, followed by a small layout-check sketch.

KRadar_Dataset_Folder
      ├── 20_cam 
      ├── 20_lpc
      ├── 20_meta
      ├── ...
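
As a quick sanity check, the sketch below (an illustration, not part of the tool) verifies that a scene folder contains the camera, LiDAR point cloud, and meta subfolders shown above; the folder-name suffixes are assumptions taken from the example layout.

# Minimal sketch: verify that a K-Radar scene has the expected subfolders.
# The suffixes (_cam, _lpc, _meta) are taken from the example layout above.
import os

def check_scene(dataset_root: str, scene_id: str) -> bool:
    expected = [f"{scene_id}_cam", f"{scene_id}_lpc", f"{scene_id}_meta"]
    missing = [d for d in expected
               if not os.path.isdir(os.path.join(dataset_root, d))]
    if missing:
        print(f"Scene {scene_id} is missing: {missing}")
    return not missing

check_scene("KRadar_Dataset_Folder", "20")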
  5. Run the 4D Radar Automatic Labelling Tools with this command.

python3 main.py

  6. Press the Open Folder button in the top right corner and open the image folder of the K-Radar scene you want to label. Make sure the K-Radar LiDAR and calibration folders are in the same directory as the K-Radar image folder.

  7. Select the image you want to label using the Left and Right buttons, or by choosing it in the image list box in the bottom right corner.

  8. Create the drivable-area segmentation by clicking the Add Attention Point button and selecting points on the drivable area in the image; see the sketch below for how such point prompts drive the segmentation.
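
For reference, the sketch below illustrates how attention points clicked on the drivable area can drive Segment-Anything to produce a drivable-area mask. It is a simplified illustration using the standard segment-anything API and the predictor from the loading sketch above; the image path and point coordinates are placeholder examples, not values used by the tool.

# Minimal sketch: turn clicked attention points into a drivable-area mask.
# Simplified illustration only; the tool's GUI logic may differ.
import cv2
import numpy as np

# Example camera frame path (placeholder) read as RGB for the predictor.
image = cv2.cvtColor(cv2.imread("KRadar_Dataset_Folder/20_cam/00001.png"),
                     cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # predictor built in the loading sketch above

# Attention points clicked on the drivable area (pixel coordinates are examples).
points = np.array([[640, 600], [700, 650]])
labels = np.array([1, 1])  # 1 = foreground (drivable area)

masks, scores, _ = predictor.predict(
    point_coords=points,
    point_labels=labels,
    multimask_output=False,
)
drivable_mask = masks[0]  # boolean HxW mask of the drivable area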

K-Radar Dataset

For the preparation of the dataset and pipeline, please refer to the following document: Dataset Preparation Guide.

We provide the K-Radar dataset in three ways to ensure effective deployment of the large-sized data (total 16TB):

  1. Access to the total dataset via our local server
  2. A portion of the dataset via Google Drive
  3. The total dataset via shipped HDD (Hard Disk Drive)

For more details, please refer to the dataset documentation.

Detection

This is the documentation for how to use our detection frameworks with the K-Radar dataset. We tested the K-Radar detection frameworks on the following environment (a version-check sketch follows the list):

  • Python 3.8.13 (3.10+ does not support open3d.)
  • Ubuntu 18.04/20.04
  • Torch 1.11.0+cu113
  • CUDA 11.3
  • opencv 4.2.0.32
  • open3d 0.15.2
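
To compare a local setup against the versions above, a quick check like the following can be used (an illustration; the import names are the standard ones for each package):

# Minimal sketch: print installed versions to compare with the tested environment.
import sys
import cv2
import open3d
import torch

print("Python :", sys.version.split()[0])   # tested: 3.8.13
print("Torch  :", torch.__version__)        # tested: 1.11.0+cu113
print("CUDA   :", torch.version.cuda)       # tested: 11.3
print("OpenCV :", cv2.__version__)          # tested: 4.2.0.32
print("Open3D :", open3d.__version__)       # tested: 0.15.2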

For the preparation and quantitative results of the 4D Radar-based object detection, please refer to the following document: Detection Pipeline Guide.

The images below showcase qualitative results, with green boxes representing detected objects using 4D Radar. From left to right, the images depict (1) front-facing camera images (for reference), (2) LiDAR point clouds (for reference), and (3) 4D Radar tensors (input for the neural network). Note that the camera images and LiDAR point clouds are shown for reference purposes only, and the bounding boxes from the 4D Radar-only detection are projected onto these visualizations.

Pre-processing

This is the documentation for how to use our pre-processing frameworks with the K-Radar dataset:

For the preparation and quantitative results of the 4D Radar pre-processing, please refer to the following document: Pre-processing Pipeline Guide.

As shown in the figure below, the pre-processing of 4D Radar consists of two stages. The first stage extracts the main measurements with high power from the 4D Radar tensor (e.g., percentile thresholding in RTNH, or more commonly CFAR), while the second stage removes noise from the first-stage output. In the first stage, it is important to exclude peripheral measurements with low power and extract measurements at an appropriate density (typically 3~10%) so that the detection network can recognize the shape of the object (refer to the paper). However, extracting measurements at this density introduces noise called sidelobes, which interferes with precise localization of the object, as shown in the figure below. Therefore, pre-processing that indicates sidelobes to the network is essential, and applying this alone greatly improves detection performance, particularly the localization performance of objects (refer to the paper). For quantitative results and pre-processed data, please refer to the pre-processing document.
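
To make the first stage concrete, the sketch below shows a simple percentile-based extraction that keeps only the highest-power cells of a 4D Radar tensor. It illustrates the idea only and is not the repository's implementation; the tensor shape and the 5% keep ratio are assumptions.

# Minimal sketch: first-stage extraction keeping the top few percent of
# high-power cells from a 4D Radar tensor (illustration only).
import numpy as np

def extract_top_percentile(radar_tensor: np.ndarray, keep_ratio: float = 0.05):
    """Keep roughly the top `keep_ratio` (typically 3~10%) highest-power cells."""
    threshold = np.quantile(radar_tensor, 1.0 - keep_ratio)
    kept_mask = radar_tensor >= threshold
    # Indices of kept cells along (Doppler, range, azimuth, elevation).
    kept_idx = np.argwhere(kept_mask)
    kept_power = radar_tensor[kept_mask]
    return kept_idx, kept_power

# Example with a random tensor; the real 4DRT dimensions differ.
tensor_4d = np.random.rand(32, 128, 107, 37).astype(np.float32)
idx, power = extract_top_percentile(tensor_4d, keep_ratio=0.05)
print(idx.shape, power.shape)  # roughly 5% of the cells are kept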

Auto-labeling

This is the documentation for how to use our auto-labeling frameworks with the K-Radar dataset:

For the preparation and quantitative results of the 4D Radar-based object detection, please refer to the following document: Auto-labeling Pipeline Guide.

The figure below shows the auto-labeling results for six road environments in the K-Radar dataset. Each cell, from left to right, displays the handmade labels, the detection results of the 4D Radar detection network trained with handmade labels (RTNH), and the network trained with automatically generated labels (i.e., auto-labels). Notably, the RTNH trained with auto-labels only from clear weather conditions performs robustly even in inclement weather. This implies that the distribution of 4D Radar data depends more on road conditions than weather conditions, allowing for the training of an all-weather resilient 4D Radar detection network using only clear weather auto-labels. For more details and quantitative results, refer to the auto-labeling document.

Regarding the commercialization and the usage of auto-labeling technologies, please contact Zeta Mobility.

Odometry

We provide the location of a GPS antenna, essential for accurate ground-truth odometry. This location is precisely processed by integrating data from high-resolution LiDAR, RTK-GPS, and IMU data. To ensure the utmost accuracy, we verify the vehicle's location by correlating the LiDAR sensor data against a detailed, high-resolution map, as illustrated below. For security purposes, we present this location information in local coordinates rather than global coordinates (i.e., UTM). The data of Sequence 1 is accessible in the resources/odometry directory. The repository for the odometry will be made available within the next few weeks.

Acknowledgement

The K-Radar dataset is contributed by Dong-Hee Paek, Kevin Tirta Wijaya, Dong-In Kim, Min-Hyeok Sun, Sangjae Cho, and Hong-Woo Seok, and advised by Seung-Hyun Kong.

We thank the maintainers of the following projects that enable us to develop K-Radar: OpenPCDet by MMLAB, Rotated_IoU by lilanxiao, and kitti-object-eval-python by traveller59.

We extend our gratitude to Jen-Hao Cheng, Sheng-Yao Kuan, Aishi Huang, Hou-I Liu, Christine Wu and Wenzheng Zhao in the Information Processing Lab at the University of Washington for providing the refined K-Radar label.

This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 01210790) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C3008370).

Citation

If you find this work useful for your research, please consider citing:

@inproceedings{
  paek2022kradar,
  title={K-Radar: 4D Radar Object Detection for Autonomous Driving in Various Weather Conditions},
  author={Dong-Hee Paek and Seung-Hyun Kong and Kevin Tirta Wijaya},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=W_bsDmzwaZ7}
}
