
Object Counter Based on YOLO v5

This project is modified from the official YOLOv5 by Ultralytics to perform a real-time object counting task on the objects detected in each frame.

The modified detect.py script prints the count of all detected objects (using the --print_all flag) as well as the count for an individual class (using e.g. --print_class "person") in the detected image/frame.
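
For illustration, a similar per-class count can be produced outside of detect.py with the standard Ultralytics PyTorch Hub API. This is a minimal sketch, not the counting logic of this repository's detect.py; the sample image path is the one shipped with YOLOv5.

import torch

# Load a pretrained COCO model from PyTorch Hub (downloaded automatically)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Run inference on a single image
results = model('data/images/bus.jpg')

# One row per detection; the 'name' column holds the class label
detections = results.pandas().xyxy[0]
counts = detections['name'].value_counts()

print(counts.to_dict())              # counts for all detected classes
print(int(counts.get('person', 0)))  # count for a single class, e.g. "person"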

Notes for running the detection script detect.py

Use the flags
  • --nosave : Do not save the recorded video when reading from the webcam

  • --line-thickness : Bounding box line thickness; use 1 for clean bounding boxes

  • --source 0 : Read from the webcam

  • --print_class : Print the number of detected objects of the given class in the frame, e.g. --print_class "cell phone"

  • --print_all : Print the counts of all detected classes in the image

Optional, for better results
  • --imgsz : Inference image size; use --imgsz 800 for more accurate results

Example Usage

python detect.py --source 0 --imgsz 800 --line-thickness 1 --print_class person --nosave

or

python detect.py --source 0 --line-thickness 1 --print_all --nosave

Quitting the script

Quit using Ctrl + C while the script is running.

Docker Implementation

Build the image

docker build -t yolov5 .

This will build a Docker image with a pre-launched Jupyter Notebook server on port 8888 inside the container.

Run the container

docker run --rm -it -p 90:8888 -v ${PWD}:/yolo/ --name yolo5 -d yolov5

Access the container

docker exec -ti yolo5 bash

Test the container with Jupyter Notebook

jupyter notebook --allow-root --notebook-dir=/yolo/ --ip=0.0.0.0 --port=8888 --no-browser

This will run Jupyter Notebook in root mode with the given directory as the default inside the container. With the port mapping from the docker run command above (-p 90:8888), the notebook can be accessed at http://localhost:90. Enter the password root to access the notebook.

The relevant parts of the original YOLOv5 README are kept below.

Highlights from the Original README

YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.

Documentation

See the YOLOv5 Docs for full documentation on training, testing and deployment.

Quick Start Examples

Install

Clone repo and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7.

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Inference with detect.py

detect.py runs inference on a variety of sources, downloading models automatically from the latest YOLOv5 release and saving results to runs/detect.

python detect.py --source 0  # webcam
                          img.jpg  # image
                          vid.mp4  # video
                          path/  # directory
                          'path/*.jpg'  # glob
                          'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                          'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
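
detect.py can also be invoked programmatically. The sketch below assumes the upstream run() function and its parameter names (weights, source, imgsz, nosave, line_thickness); the counting flags added in this repository (--print_all, --print_class) are CLI options above and are not assumed to be part of run().

# Minimal sketch, assuming the upstream detect.run() signature
import detect  # detect.py from this repository

detect.run(
    weights='yolov5s.pt',  # downloaded automatically if missing
    source='0',            # webcam; also accepts image/video/dir/glob/URL
    imgsz=(800, 800),      # larger size for more accurate results
    line_thickness=1,      # clean bounding boxes
    nosave=True,           # do not save the recorded video
)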

Train

YOLOv5 classification training supports auto-download of MNIST, Fashion-MNIST, CIFAR10, CIFAR100, Imagenette, Imagewoof, and ImageNet datasets with the --data argument. To start training on MNIST for example use --data mnist.

# Single-GPU
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128

# Multi-GPU DDP
python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3

Val

Validate YOLOv5m-cls accuracy on ImageNet-1k dataset:

bash data/scripts/get_imagenet.sh --val  # download ImageNet val split (6.3G, 50000 images)
python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224  # validate

Predict

Use pretrained YOLOv5s-cls.pt to predict bus.jpg:

python classify/predict.py --weights yolov5s-cls.pt --data data/images/bus.jpg
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s-cls.pt')  # load from PyTorch Hub
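
The hub-loaded classification model expects a preprocessed BCHW tensor rather than a raw image path. The following is a minimal usage sketch; the preprocessing values (224 px resize/crop, ImageNet mean and std) are assumptions and not taken from this README.

import torch
from PIL import Image
from torchvision import transforms

model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s-cls.pt')
model.eval()

# Assumed ImageNet-style preprocessing for the 224x224 classifier input
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open('data/images/bus.jpg')).unsqueeze(0)  # BCHW tensor
with torch.no_grad():
    probs = model(img).softmax(1)
print(int(probs.argmax(1)[0]))  # predicted ImageNet class index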

Export

Export a group of trained YOLOv5s-cls, ResNet and EfficientNet models to ONNX and TensorRT:

python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224
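
The exported ONNX model can then be loaded with any ONNX runtime. The following is a minimal sketch assuming onnxruntime is installed (it is not part of this repository) and that the classifier was exported with --img 224 as above; the input name is read from the model rather than assumed.

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('yolov5s-cls.onnx')  # produced by export.py above
input_name = session.get_inputs()[0].name

# Dummy NCHW input matching the exported size (1 x 3 x 224 x 224)
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

scores = session.run(None, {input_name: image})[0]
print(int(scores.argmax(axis=1)[0]))  # predicted class index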