This repository contains the implementation of the paper -- MobiP: A Lightweight Model for Driving Perception Using MobileNet.
MobiP is a lightweight multi-task network that simultaneously performs traffic object detection, drivable area segmentation, and lane line detection. The model achieves an inference speed of 58 FPS on an NVIDIA Tesla V100 while maintaining competitive performance on all three tasks compared to other multi-task networks.
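As a quick sanity check on the speed figure, 58 FPS corresponds to roughly 17 ms per frame. The conversion below is plain arithmetic for illustration, not code from this repository:

```python
# Convert throughput (frames per second) to per-frame latency in milliseconds.
def fps_to_latency_ms(fps: float) -> float:
    return 1000.0 / fps

# 58 FPS on a Tesla V100 works out to about 17.2 ms per frame.
print(round(fps_to_latency_ms(58), 1))
```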
This code is based on Python 3.7, PyTorch 1.7+, and torchvision 0.8+:
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
See requirements.txt for additional dependencies:
pip install -r requirements.txt
Please follow the instructions in this link to download the BDD100K dataset. Then update the DATASET-related parameters in ./lib/config/default.py to point to your local dataset path.
Check the configuration in ./lib/config/default.py
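The DATASET-related entries you edit typically look like the sketch below. The parameter names and directory layout here are illustrative assumptions; check ./lib/config/default.py for the identifiers this repository actually uses:

```python
# Illustrative sketch only -- the real file defines its own config object
# and parameter names. Replace the paths with your local BDD100K layout.
DATASET = {
    "dataroot": "/data/bdd100k/images/100k",      # input images
    "labelroot": "/data/bdd100k/labels/det",      # detection annotations
    "maskroot": "/data/bdd100k/labels/drivable",  # drivable-area masks
    "laneroot": "/data/bdd100k/labels/lane",      # lane-line masks
}
```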
and start training:
python tools/train.py
Multi GPU mode:
python -m torch.distributed.launch --nproc_per_node=N tools/train.py # N: the number of GPUs
The repository provides a checkpoint of our trained model for demonstration:
python tools/test.py --weights Checkpoints/model.pth
The implementation of MobiP is based on YOLOP and HybridNet. The authors would like to thank them for their excellent work.