SurfEmb

SurfEmb: Dense and Continuous Correspondence Distributions
for Object Pose Estimation with Learnt Surface Embeddings

Rasmus Laurvig Haugaard, Anders Glent Buch
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2022
pre-print | project-site

The easiest way to explore correspondence distributions is through the project site.

The following describes how to reproduce the results.

Install

Download surfemb:

$ git clone https://github.com/rasmushaugaard/surfemb.git
$ cd surfemb

All following commands are expected to be run in the project root directory.

Install conda, create a new environment, surfemb, and activate it:

$ conda env create -f environment.yml
$ conda activate surfemb

Download BOP data

Download and extract datasets from the BOP site. The base archive and object models are needed for both training and inference. For training, the PBR-BlenderProc4BOP training images are needed as well, and for inference, the BOP'19/20 test images are needed.

Extract the datasets under data/bop (or make a symbolic link).
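
If the datasets are already extracted somewhere else, a symbolic link is enough; for example (the source path below is a placeholder):

$ mkdir -p data
$ ln -s /path/to/bop data/bop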

Model

Download a trained model (see releases):

$ wget https://github.com/rasmushaugaard/surfemb/releases/download/v0.0.1/tless-2rs64lwh.compact.ckpt -P data/models

OR

Train a model:

$ python -m surfemb.scripts.train [dataset] --gpus [gpu ids]

For example, to train a model on T-LESS on cuda:0:

$ python -m surfemb.scripts.train tless --gpus 0

Inference data

We use the detections from CosyPose's MaskRCNN models, and sample surface points evenly for inference.
For ease of use, this data can be downloaded and extracted as follows:

$ wget https://github.com/rasmushaugaard/surfemb/releases/download/v0.0.1/inference_data.zip
$ unzip inference_data.zip

OR

Extract detections and sample surface points

Surface samples

First, flip the normals of ITODD object 18, which is inside out.
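
One way to do the flip is with trimesh. This is just a sketch, assuming trimesh is available in the environment and that the ITODD models were extracted to data/bop/itodd/models:

$ python -c "import trimesh; m = trimesh.load('data/bop/itodd/models/obj_000018.ply', force='mesh'); m.invert(); m.export('data/bop/itodd/models/obj_000018.ply')"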

Then remove invisible parts of the objects:

$ python -m surfemb.scripts.misc.surface_samples_remesh_visible [dataset] 

sample points evenly from the mesh surface:

$ python -m surfemb.scripts.misc.surface_samples_sample_even [dataset] 

and recover the normals for the sampled points:

$ python -m surfemb.scripts.misc.surface_samples_recover_normals [dataset] 
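
For example, for T-LESS the three steps above become:

$ python -m surfemb.scripts.misc.surface_samples_remesh_visible tless
$ python -m surfemb.scripts.misc.surface_samples_sample_even tless
$ python -m surfemb.scripts.misc.surface_samples_recover_normals tless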

Detection results

Download CosyPose into the same directory that SurfEmb was downloaded to, install CosyPose, and follow their guide to download their BOP-trained detection results. Then:

$ python -m surfemb.scripts.misc.load_detection_results [dataset]

Inference inspection

To see pose estimation examples on the training images, run:

$ python -m surfemb.scripts.infer_debug [model_path] --device [device]

[device] could, for example, be cuda:0 or cpu.

Add --real to use the test images with simulated crops based on the ground truth poses, or further add --detections to use the CosyPose detections.
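
For example, with the downloaded T-LESS checkpoint, inspecting the CosyPose detections on the real test images could look like:

$ python -m surfemb.scripts.infer_debug data/models/tless-2rs64lwh.compact.ckpt --device cuda:0 --real --detections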

Inference for BOP evaluation

Inference is run on the (real) test images with CosyPose detections:

$ python -m surfemb.scripts.infer [model_path] --device [device]

Pose estimation results are saved to data/results.
To obtain results with depth (requires running normal inference first), run:

$ python -m surfemb.scripts.infer_refine_depth [model_path] --device [device]

The results can be formatted for BOP evaluation using:

$ python -m surfemb.scripts.misc.format_results_for_eval [poses_path]

Either upload the formatted results to the BOP Challenge website or evaluate using the BOP toolkit.
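
Put together, a full evaluation run for the downloaded T-LESS model might look like the following (the exact name of the poses file written to data/results depends on the run, so the last path is a placeholder):

$ python -m surfemb.scripts.infer data/models/tless-2rs64lwh.compact.ckpt --device cuda:0
$ python -m surfemb.scripts.infer_refine_depth data/models/tless-2rs64lwh.compact.ckpt --device cuda:0
$ python -m surfemb.scripts.misc.format_results_for_eval data/results/[poses file]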

Extra

Custom dataset: Format the dataset as a BOP dataset and put it in data/bop.
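
As a rough orientation (not an exhaustive specification of the BOP format), such a dataset typically looks like:

data/bop/[custom_dataset]/
    camera.json
    models/        obj_000001.ply, ..., models_info.json
    train_pbr/     per-scene folders with rgb/, depth/, scene_camera.json, scene_gt.json, scene_gt_info.json
    test/          same per-scene structure as train_pbr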