We help deaf and nonverbal people communicate with hearing people through hand-gesture-to-speech conversion. The code uses depth maps from the Kinect camera together with techniques such as convex hull and contour mapping to recognise five hand signs.
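The convex-hull step mentioned above can be sketched in pure Python with Andrew's monotone chain algorithm; this is only an illustrative stand-in, as the project itself presumably operates on contours extracted from Kinect depth maps (e.g. via OpenCV) rather than raw point lists.

```python
def convex_hull(points):
    """Return the convex hull of 2D points in counter-clockwise order
    (Andrew's monotone chain). Points are (x, y) tuples, e.g. hand-contour
    pixels extracted from a depth-thresholded Kinect frame."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of the cross product OA x OB; > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)

    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)

    # Drop the last point of each half because it repeats the other half's start.
    return lower[:-1] + upper[:-1]
```

In a gesture-recognition pipeline, the gaps between the hull and the hand contour (convexity defects) are what distinguish the different hand signs, since the valleys between extended fingers produce deep defects.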
A dataset for patch-based person classification (person vs. non-person objects) and posture classification (standing vs. sitting vs. squatting). The data was recorded with a Kinect2 sensor and consists of labeled depth-image patches of 27 persons in various postures and of various non-person objects. In total, the dataset contains more than 23…
A virtual fitting room using Kinect v2 with a gesture-controlled GUI. The project maps clothing images downloaded from the internet onto a person standing in front of the Kinect sensor.
A project in which an autonomous robot uses 3D mapping to map its entire environment and then navigates autonomously toward a detected switchboard. It was developed with the basic idea of enabling the robot to navigate toward the switchboard whenever it needs to be …
A data science project that investigates potentially useful properties of a motion-captured skateboarder performing the so-called "Ollie". The data was gathered using Kinect v2 body and floor frames.