This project takes a human-centered approach to controlling a PowerPoint presentation through computer vision. It has three main components: object detection, motion tracking, and gesture recognition. The algorithm draws a bounding box around the user's hand and tracks its motion in the captured video to advance to the next slide or return to the previous one. At the end of the presentation, detecting the user's closed fist exits the slideshow. After researching and implementing several methods for each component, we found that background subtraction, convex hulls, and Haar cascades produced the most accurate results.
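Two of the techniques named above, background subtraction and convex hulls, can be sketched in a few lines. The snippet below is an illustrative toy, not the project's actual code: it thresholds the per-pixel difference between a frame and a static background to get a foreground mask, then computes the convex hull of the foreground points with Andrew's monotone-chain algorithm. A real implementation would typically use OpenCV (`cv2.createBackgroundSubtractorMOG2`, `cv2.convexHull`) on live video; the function names and threshold here are assumptions for the demo.

```python
import numpy as np

def background_subtract(frame, background, threshold=25):
    """Toy background subtraction: mark pixels whose absolute
    difference from the static background exceeds a threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold  # boolean foreground mask

def convex_hull(points):
    """Convex hull via Andrew's monotone chain, returned in
    counter-clockwise order. `points` is an iterable of (x, y)."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def half_hull(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower = half_hull(pts)
    upper = half_hull(pts[::-1])
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints

# Demo: a synthetic 6x6 grayscale frame with a bright "hand" blob.
background = np.zeros((6, 6), dtype=np.uint8)
frame = background.copy()
frame[1:4, 1:4] = 200                      # foreground region
mask = background_subtract(frame, background)
ys, xs = np.nonzero(mask)                  # foreground pixel coordinates
hull = convex_hull(zip(xs.tolist(), ys.tolist()))
print(hull)  # corners of the 3x3 blob
```

In the full pipeline, the hull (and its convexity defects) distinguishes an open hand from a closed fist, while the motion of the hand's bounding box across frames maps to next-slide or previous-slide commands.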