# ODMS

ODMS is the first dataset for learning Object Depth via Motion and Segmentation. ODMS training data are configurable and extensible, with each training example consisting of a series of object segmentation masks, camera movement distances, and ground truth object depth. As a benchmark evaluation, we also provide four ODMS validation and test sets with 15,650 examples in multiple domains, including robotics and driving. In our paper, we use an ODMS-trained network to perform object depth estimation in real-time robot grasping experiments, demonstrating how ODMS is a viable tool for 3D perception from a single RGB camera.
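For intuition, below is a minimal sketch of what a single training example could look like in code. The field names, shapes, and units are illustrative assumptions only and do not reflect the dataset's actual format.

```python
# Illustrative sketch of one ODMS-style training example.
# Field names, shapes, and units are assumptions, not the dataset's actual format.
import numpy as np

n_obs = 10            # number of observations per example (assumed)
H, W = 480, 640       # segmentation mask resolution (assumed)

example = {
    # Binary object segmentation mask for each observation.
    "masks": np.zeros((n_obs, H, W), dtype=bool),
    # Camera movement (m) along the optical axis for each observation,
    # here expressed as the remaining distance to the final camera pose.
    "camera_movement": np.linspace(0.45, 0.0, n_obs),
    # Ground-truth object depth (m) at the final camera pose.
    "depth": 1.25,
}
```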

Contact: Brent Griffin (griffb at umich dot edu)

## Publication

Please cite our paper if you find it useful for your research.

```bibtex
@inproceedings{GrCoECCV20,
  author = {Griffin, Brent A. and Corso, Jason J.},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  title = {Learning Object Depth from Camera Motion and Video Object Segmentation},
  year = {2020}
}
```

## Quick Introduction

ECCV 2020 Supplementary Video: https://youtu.be/c90Fg_whjpI


## Using ODMS

Run `./demo/demo_datagen.py` to generate random ODMS data to train your model.
Example training data configurations are provided in the `./config/` folder, and there is an option to save a static dataset.
[native Python, has scipy dependency]
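The sketch below illustrates the general idea behind generating random training data of this kind; it is not the implementation in `demo_datagen.py`, the sampling ranges and the 1/depth² area model are assumptions, and for brevity it produces mask areas rather than full masks.

```python
# Sketch of the idea behind random ODMS-style data generation.
# This is NOT demo_datagen.py; ranges and the pinhole-style area model are assumed.
import numpy as np

rng = np.random.default_rng(0)

def random_example(n_obs=10):
    # Sample a ground-truth object depth (m) at the final camera pose.
    depth_final = rng.uniform(0.5, 3.0)
    # Sample camera movement: remaining distance (m) to the final pose,
    # decreasing to zero as the camera approaches the object.
    movement = np.sort(rng.uniform(0.0, 0.5, size=n_obs))[::-1]
    movement[-1] = 0.0
    depths = depth_final + movement          # object depth at each observation
    # Under a pinhole camera, apparent mask area falls off with squared depth.
    area_at_1m = rng.uniform(500.0, 5000.0)  # mask area (pixels) at 1 m (assumed)
    mask_areas = area_at_1m / depths**2
    return {"mask_areas": mask_areas,
            "camera_movement": movement,
            "depth": depth_final}

print(random_example())
```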

Run `./demo/demo_dataset_eval.py` to evaluate your model on the ODMS validation and test sets.
An example evaluation of the VOS-DE baseline is included, and results are saved in the `./results/` folder.
[native Python, VOS-DE baseline has skimage dependency]
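For reference, here is a hedged sketch of an analytic, VOS-DE-style depth estimate computed directly from mask areas and camera movement, with a simple percent-error check; the repository's actual baseline and evaluation code may differ in detail.

```python
# Hedged sketch of an analytic VOS-DE-style depth estimate; the repository's
# baseline implementation and evaluation metric may differ in detail.
import numpy as np

def estimate_depth(mask_areas, camera_movement):
    """Estimate object depth (m) at the final camera pose.

    mask_areas: object mask area (pixels) at each observation.
    camera_movement: remaining distance (m) to the final camera pose
        for each observation (last entry is 0).
    """
    s = np.sqrt(np.asarray(mask_areas, dtype=float))   # scale ~ 1 / depth
    z = np.asarray(camera_movement, dtype=float)
    # With s_i ~ 1 / d_i and d_i = d_final + z_i, each earlier observation
    # gives (s_final - s_i) * d_final = s_i * z_i; solve by least squares.
    a = s[-1] - s[:-1]
    b = s[:-1] * z[:-1]
    return float(np.dot(a, b) / np.dot(a, a))

def percent_error(pred, gt):
    # Absolute percent error relative to ground truth.
    return 100.0 * abs(pred - gt) / gt

# Quick synthetic check: object 1.2 m away, camera approaches by up to 0.4 m.
z = np.array([0.4, 0.2, 0.0])
areas = 1000.0 / (1.2 + z) ** 2
print(estimate_depth(areas, z), percent_error(estimate_depth(areas, z), 1.2))
```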

## Benchmark

| Method | Robot | Driving | Normal | Perturb | All |
|--------|-------|---------|--------|---------|-----|
| ODNlr  | 13.1  | 31.7    | 8.6    | 17.9    | 17.8 |
| VOS-DE | 32.6  | 36.0    | 7.9    | 33.6    | 27.5 |

Is your method missing even though it is published and the code is public? Let us know and we'll add it.

## Method

ECCV 2020 Presentation: https://youtu.be/ZD4Y4oQbdks


## Use

This code is available for non-commercial research purposes only.