Orthographic Feature Transform for Monocular 3D Object Detection

[OFTNet architecture diagram]

This is a PyTorch implementation of the OFTNet network from the paper Orthographic Feature Transform for Monocular 3D Object Detection. The code currently supports training the network from scratch on the KITTI dataset, and intermediate results can be visualised using TensorBoard. This version of the project does not currently support multi-class object detection; it only detects cars in the KITTI dataset. The project does not use LiDAR data to detect objects. Note also that there are some slight implementation differences between this version, the original code used in the paper, and the source code cited below.
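The core idea of the orthographic feature transform is to lift image features onto a ground-plane (bird's-eye-view) grid by projecting each voxel into the image and pooling the features that fall inside its footprint. The sketch below illustrates the idea only and is not this repository's implementation: the function name, shapes, and the single-point bilinear sampling (the paper pools over the whole projected voxel region) are all assumptions made for illustration.

# Minimal illustrative sketch of the orthographic feature transform idea.
import torch
import torch.nn.functional as F

def orthographic_feature_transform(img_feats, voxel_centres, calib):
    """img_feats:     (C, H, W) image feature map
       voxel_centres: (N, 3) voxel centres in camera coordinates (x, y, z)
       calib:         (3, 4) camera projection matrix
       returns:       (N, C) one feature vector per voxel"""
    C, H, W = img_feats.shape

    # Project the 3D voxel centres into pixel coordinates.
    ones = torch.ones(voxel_centres.size(0), 1)
    proj = torch.cat([voxel_centres, ones], dim=1) @ calib.t()
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

    # Normalise to [-1, 1] and bilinearly sample the feature map at each
    # projected voxel centre (a simplification of the paper's area pooling).
    grid = torch.empty(1, voxel_centres.size(0), 1, 2)
    grid[0, :, 0, 0] = uv[:, 0] / (W - 1) * 2 - 1
    grid[0, :, 0, 1] = uv[:, 1] / (H - 1) * 2 - 1
    sampled = F.grid_sample(img_feats[None], grid, align_corners=True)
    return sampled[0, :, :, 0].t()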

Training

The training script can be run by calling train.py with the name of the experiment as a required positional argument.

python train.py name-of-experiment --gpu 0

By default, data is read from data/kitti/objects and model checkpoints are saved to experiments. The model is trained on the KITTI 3D object detection benchmark, which can be downloaded from the KITTI website. See train.py for a full list of training options.
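The data directory is assumed here to follow the standard layout of the official KITTI object detection archives; the exact structure expected by the dataset loader may differ slightly:

data/kitti/objects/training/image_2    # left colour images
data/kitti/objects/training/calib      # camera calibration files
data/kitti/objects/training/label_2    # object annotations

Assuming the training script writes its TensorBoard summaries under the experiment directory, intermediate results can be inspected with:

tensorboard --logdir experiments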

Inference

To decode the network predictions and visualise the resulting bounding boxes, run the infer.py script with the path to the model checkpoint you wish to visualise:

python infer.py --model-path /path/to/checkpoint.pth.gz --gpu 0
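For a quick look at a saved checkpoint outside of infer.py, a gzip-compressed file written with torch.save can be opened as a file object and passed to torch.load. This is a sketch under that assumption; the checkpoint contents shown in the comment are illustrative, not guaranteed by this repository.

import gzip
import torch

# Open the compressed checkpoint and load it onto the CPU for inspection.
with gzip.open('/path/to/checkpoint.pth.gz', 'rb') as f:
    checkpoint = torch.load(f, map_location='cpu')

print(list(checkpoint.keys()))  # e.g. model weights, optimiser state, epoch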

References

Source code

@article{roddick2018orthographic,
  title={Orthographic feature transform for monocular 3d object detection},
  author={Roddick, Thomas and Kendall, Alex and Cipolla, Roberto},
  journal={British Machine Vision Conference},
  year={2019}
}
