The overall goal of this project is to build a word recognizer for American Sign Language video sequences, demonstrating the power of probabilistic models. In particular, this project employs hidden Markov models (HMMs) to analyze a series of measurements taken from videos of American Sign Language (ASL) collected for research (see the RWTH-BOSTON-104 Database). In the example video, the right-hand x and y locations are plotted as the speaker signs the sentence.
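As a rough illustration of the approach (not the project's actual training code), the sketch below fits a Gaussian HMM to a toy sequence of 2-D hand coordinates using hmmlearn. The coordinate values and the number of hidden states are placeholders, not real RWTH-BOSTON-104 measurements.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Toy observation sequence: each row is one frame's (right-x, right-y) position.
# These values are placeholders, not real RWTH-BOSTON-104 measurements.
X = np.array([[170, 175], [168, 173], [165, 170],
              [160, 168], [158, 166], [155, 163]], dtype=float)

# Fit a 2-state Gaussian HMM to this single sequence; the real project
# selects the number of hidden states per word via model selection.
model = GaussianHMM(n_components=2, n_iter=100, random_state=0)
model.fit(X, lengths=[len(X)])

# Log likelihood of the sequence under the fitted model.
print(model.score(X))
```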
This project requires Python 3 and the following Python libraries to be installed:

- NumPy
- pandas
- hmmlearn
Notes:
- It is highly recommended that you install the Anaconda distribution of Python and load the environment included in the "Your conda env for AI ND" lesson.
- The most recent development version of hmmlearn, 0.2.1, contains a bugfix related to the log function, which is used in this project. To install this version of hmmlearn, install it directly from its repository with the following command from within your activated Anaconda environment:
```
pip install git+https://github.com/hmmlearn/hmmlearn.git
```
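After running the command, a quick sanity check (assuming hmmlearn exposes the standard `__version__` attribute) confirms that the development version was picked up:

```python
import hmmlearn
print(hmmlearn.__version__)  # expect 0.2.1, the development version noted above
```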
```python
import numpy as np
import pandas as pd
from asl_data import AslDb

asl = AslDb()  # initializes the database
asl.df.head()  # displays the first five rows of the asl database, indexed by video and frame
```
| video | frame | left-x | left-y | right-x | right-y | nose-x | nose-y | speaker |
|-------|-------|--------|--------|---------|---------|--------|--------|---------|
| 98 | 0 | 149 | 181 | 170 | 175 | 161 | 62 | woman-1 |
| 98 | 1 | 149 | 181 | 170 | 175 | 161 | 62 | woman-1 |
| 98 | 2 | 149 | 181 | 170 | 175 | 161 | 62 | woman-1 |
| 98 | 3 | 149 | 181 | 170 | 175 | 161 | 62 | woman-1 |
| 98 | 4 | 149 | 181 | 170 | 175 | 161 | 62 | woman-1 |
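Because `asl.df` is a pandas DataFrame indexed by video and frame, individual frames can be retrieved with standard pandas indexing. A minimal sketch, using the video/frame pair shown in the table above:

```python
# Look up all measurements for frame 1 of video 98; returns a pandas Series
# keyed by column name (left-x, left-y, right-x, right-y, nose-x, nose-y, speaker).
frame = asl.df.loc[(98, 1)]
print(frame['right-x'], frame['right-y'])  # 170 175 for this frame
```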