
GFNet

This is the PyTorch implementation of our paper:

GFNet: Geometric Flow Network for 3D Point Cloud Semantic Segmentation
Accepted by TMLR, 2022
Haibo Qiu, Baosheng Yu and Dacheng Tao

Abstract

Point cloud semantic segmentation from projected views, such as range-view (RV) and bird's-eye-view (BEV), has been intensively investigated. Different views capture different information of point clouds and thus are complementary to each other. However, recent projection-based methods for point cloud semantic segmentation usually utilize a vanilla late fusion strategy for the predictions of different views, failing to explore the complementary information from a geometric perspective during the representation learning. In this paper, we introduce a geometric flow network (GFNet) to explore the geometric correspondence between different views in an align-before-fuse manner. Specifically, we devise a novel geometric flow module (GFM) to bidirectionally align and propagate the complementary information across different views according to geometric relationships under the end-to-end learning scheme. We perform extensive experiments on two widely used benchmark datasets, SemanticKITTI and nuScenes, to demonstrate the effectiveness of our GFNet for projection-based point cloud semantic segmentation. Concretely, GFNet not only significantly boosts the performance of each individual view but also achieves state-of-the-art results over all existing projection-based models.

Segmentation GIF

(A GIF of segmentation results on SemanticKITTI produced by GFNet)

Framework

(Overview of the GFNet framework and the geometric flow module (GFM))

Table of Contents

  • Installation
  • Data preparation
  • Training
  • Inference
  • Acknowledgment
  • Citation

Installation

  1. Clone this repo:
    git clone https://github.com/haibo-qiu/GFNet.git
  2. Create a conda env with
    conda env create -f environment.yml
    Note that we also provide a Dockerfile as an alternative setup method.
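
For example, a minimal Docker-based setup might look like the following (a sketch only: the image tag gfnet and the mount path are arbitrary names, and GPU passthrough assumes the NVIDIA container toolkit is installed):

    docker build -t gfnet .
    docker run --gpus all -it -v $(pwd):/workspace/GFNet gfnet bash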

Data preparation

  1. Download point clouds data from SemanticKITTI and nuScenes.
  2. For SemanticKITTI, directly unzip all data into dataset/SemanticKITTI.
  3. For nuScenes, first unzip the data to dataset/nuScenes/full and then use the following command to generate pkl files for both training and testing:
    python dataset/utils_nuscenes/preprocess_nuScenes.py
  4. The final data folder structure will look like:
       dataset
       ├── SemanticKITTI
       │   └── sequences
       │       ├── 00
       │       ├── ...
       │       └── 21
       └── nuScenes
           ├── full
           │   ├── lidarseg
           │   ├── samples
           │   ├── v1.0-{mini, test, trainval}
           │   └── ...
           ├── nuscenes_train.pkl
           ├── nuscenes_val.pkl
           ├── nuscenes_trainval.pkl
           └── nuscenes_test.pkl
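
As a quick sanity check of the layout above (a minimal sketch; the paths simply follow the tree shown):

    ls dataset/SemanticKITTI/sequences   # should list 00 ... 21
    ls dataset/nuScenes/full             # should list lidarseg, samples, v1.0-*, ...
    ls dataset/nuScenes/*.pkl            # should list the four generated pkl files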
    
    

Training

  • Please refer to configs/semantic-kitti.yaml and configs/nuscenes.yaml for dataset-specific properties.
  • Download the ImageNet-pretrained ResNet-34 model to pretrained/resnet34-333f7ec4.pth (see the download sketch after this list).
  • The hyperparameters for training are included in configs/resnet_semantickitti.yaml and configs/resnet_nuscenes.yaml. After adjusting the settings for your purposes, the network can be trained in an end-to-end manner by:
    1. ./scripts/start.sh on SemanticKITTI.
    2. ./scripts/start_nuscenes.sh on nuScenes.
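
For reference, the ImageNet-pretrained ResNet-34 weights can usually be fetched from the standard torchvision URL (an assumption that this is the intended checkpoint; the hash in the filename matches torchvision's resnet34 release):

    mkdir -p pretrained
    wget https://download.pytorch.org/models/resnet34-333f7ec4.pth -P pretrained/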

Inference

SemanticKITTI

  1. Download gfnet_63.0_semantickitti.pth.tar into pretrained/.
  2. Evaluate on the SemanticKITTI validation set by:
    ./scripts/infer.sh
    Alternatively, you can use the official semantic-kitti-api for evaluation.
  3. To reproduce the results we submitted to the test server:
    1. download gfnet_submit_semantickitti.pth.tar into pretrained/,
    2. uncomment and run the second command in ./scripts/infer.sh,
    3. zip path_to_results_folder/sequences for submission (see the sketch below).
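
A minimal packaging sketch for step 3 (submission.zip is an arbitrary name, and the assumption here is that the sequences folder should sit at the root of the archive):

    cd path_to_results_folder
    zip -r ../submission.zip sequences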

nuScenes

  1. Download gfnet_76.8_nuscenes.pth.tar into pretrained/.
  2. Evaluate on the nuScenes validation set by:
    ./scripts/infer_nuscenes.sh
  3. To reproduce the results we submitted to the test server:
    1. download gfnet_submit_nuscenes.pth.tar into pretrained/.
    2. uncomment and run the second command in ./scripts/infer_nuscenes.sh.
    3. check that the predictions are in a valid format by:
      ./dataset/utils_nuscenes/check.sh
      where result_path needs to be modified accordingly.
    4. submit the dataset/nuScenes/preds.zip to the test server.
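
Before uploading, it can be worth listing the archive contents as a final sanity check (a minimal sketch):

    unzip -l dataset/nuScenes/preds.zip | head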

Acknowledgment

This repo is built on lidar-bonnetal, PolarSeg and kprnet. Thanks to the contributors of these repos!

Citation

If you use our code or results in your research, please consider citing:

@article{qiu2022gfnet,
  title={{GFN}et: Geometric Flow Network for 3D Point Cloud Semantic Segmentation},
  author={Haibo Qiu and Baosheng Yu and Dacheng Tao},
  journal={Transactions on Machine Learning Research},
  year={2022},
  url={https://openreview.net/forum?id=LSAAlS7Yts},
}
