This project includes scripts for processing videos and counting objects using YOLOv8 models.
Demo output video: output_video.mp4
Result:
{
"laptop": 2,
"bottle": 2,
"cell phone": 2,
"chair": 1,
"car": 1,
"keyboard": 1
}
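Each run also saves these counts to a JSON file inside its output directory (see the processing notes below), so they can be reloaded programmatically. A minimal sketch, assuming a hypothetical runs/exp1/counts.json path; the actual directory and file names may differ:

```python
import json
from pathlib import Path

# Hypothetical location: the actual run directory and file name may differ.
counts_path = Path("runs") / "exp1" / "counts.json"

with counts_path.open() as f:
    counts = json.load(f)  # e.g. {"laptop": 2, "bottle": 2, "chair": 1, ...}

# Print counts from most to least frequent.
for label, count in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {count}")
```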
Installation:

git clone https://github.com/dev02chandan/Yolo8-Object-Detection-and-Counting.git
cd Yolo8-Object-Detection-and-Counting

On Windows:
python -m venv venv
.\venv\Scripts\activate

On Linux/macOS:
python3 -m venv venv
source venv/bin/activate

Then install the dependencies:
pip install -r requirements.txt
Use the Frontend:
streamlit run src/app.py
and go to the localhost link provided in the terminal.
OR
Run directly in terminal:
python src/main.py --video_path "videos/video1.mp4" --model_path "yolov8m.pt" --classes_to_count 39 67 63 56 2 66
This will process the video by reducing its frame rate, detecting and counting objects, and writing the results to a new directory within runs/ that contains the counts in a JSON file and the processed video.
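The --classes_to_count values are COCO class IDs (here 39 bottle, 67 cell phone, 63 laptop, 56 chair, 2 car, 66 keyboard, matching the result above). If you are unsure of an ID, a small sketch like the following can look it up from the model itself, assuming the ultralytics package is installed:

```python
from ultralytics import YOLO

# model.names maps class ID -> label for the loaded weights.
model = YOLO("yolov8m.pt")

wanted = {"bottle", "cell phone", "laptop", "chair", "car", "keyboard"}
ids = sorted(i for i, name in model.names.items() if name in wanted)
print(ids)  # expected: [2, 39, 56, 63, 66, 67]
```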
OR
Run the code directly in Colab:
Follow the Roboflow notebook below to train a YOLOv8 model on custom objects; it also covers creating and labelling the dataset:
NOTE: When you train on new objects, the model will forget the old ones; this is called catastrophic forgetting. Your dataset should therefore include all the objects you want the model to detect. Overcoming this is called continual learning, which is still an active area of research.
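As a rough sketch of what that fine-tuning step looks like with the ultralytics API (the dataset YAML name and hyperparameters are placeholders; the notebook covers the exact workflow):

```python
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on your own dataset.
# "data.yaml" is a placeholder; it must list every class the final model
# should detect, not only the newly added ones (see the note above).
model = YOLO("yolov8m.pt")
model.train(data="data.yaml", epochs=100, imgsz=640)
```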
- Duplicate items may be counted because of reflections or circular/shaky camera movement. Avoiding these while recording the video improves results.
- Specifying the exact classes that you want to count improves the results.
- Low-confidence duplicate detections are common. You can raise or lower the confidence threshold to suit your needs (line 66 in object_counting.py); see the sketch after this list.
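For reference, a minimal sketch of a detection/tracking call with an explicit confidence threshold and class filter, assuming the ultralytics API (the actual call in object_counting.py may differ):

```python
from ultralytics import YOLO

model = YOLO("yolov8m.pt")

# conf drops low-confidence detections; classes restricts results to the
# chosen COCO IDs. Both values are illustrative, not the project defaults.
results = model.track(
    source="videos/video1.mp4",
    conf=0.5,
    classes=[39, 67, 63, 56, 2, 66],
)
```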
Special Thanks to Prof. Kapil Rathore Sir.