Commit: rename
breimers committed Apr 21, 2020
1 parent dfb843a commit 58d61d7
Showing 2 changed files with 8 additions and 8 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -1,13 +1,13 @@
-# IntrospectData's OpenSource FER Application
+# Facial Expression Recognition

-Give the function an input and it will return a dictionary of detected faces and emotion predictions.
+Give the function an input and it will return a dictionary of detected faces and expression predictions.


---

## About

-This is a python3 utility for Facial Detection/Emotion Recognition (FER) using Keras and OpenCV.
+This is a python3 cli for Facial Expression Recognition (FER) using Keras and OpenCV.

This project uses the [haarcascade](https://github.com/opencv/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml) xml for facial detection.

10 changes: 5 additions & 5 deletions fer_capture/main.py
@@ -12,7 +12,7 @@
log = logging.getLogger("fer-capture-log")

#global values
-emotion_dict = {0: "Angry", 1: "Disgust", 2: "Fear", 3: "Happy", 4: "Sad", 5: "Surprise", 6: "Neutral"}
+expression_dict = {0: "Angry", 1: "Disgust", 2: "Fear", 3: "Happy", 4: "Sad", 5: "Surprise", 6: "Neutral"}
face_casc = os.path.join(os.path.dirname(__file__), "haarcascade_frontalface_default.xml")

default_model_path = wget.download("https://storage.googleapis.com/id-public-read/model.h5")
@@ -56,13 +56,13 @@ def face_check(img, model, show=False):
model (model): Keras model.
Returns:
-data (dict): Dictionary containing detected faces along with the predicted emotions.
+data (dict): Dictionary containing detected faces along with the predicted expressions.
"""
#begin analysis
frame = img
gray, faces = detect_faces(frame)
data = {"faces" : []}
-log.info("Beginning emotion recognition.")
+log.info("Beginning expression recognition.")

for i, (x, y, w, h) in enumerate(faces):
roi_gray = gray[y:y + h, x:x + w]
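The `roi_gray` slice above is plain NumPy indexing: rows first (`y`), then columns (`x`). A small sketch with a synthetic frame, assuming only NumPy:

```python
import numpy as np

# Synthetic 10x10 grayscale frame standing in for a real image.
gray = np.arange(100).reshape(10, 10)

# A hypothetical (x, y, w, h) box of the kind the cascade returns.
x, y, w, h = 2, 3, 4, 5

# Same crop as in face_check: h rows down from y, w columns right of x.
roi_gray = gray[y:y + h, x:x + w]
print(roi_gray.shape)  # (5, 4) -> (h, w)
```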
@@ -119,7 +119,7 @@ def check_image(image_path, model_path=default_model_path, show=False):
show (bool): Whether or not to display the media being analyzed.
Returns:
-data (dict): Dictionary containing detected faces along with the predicted emotions.
+data (dict): Dictionary containing detected faces along with the predicted expressions.
"""
img = path_to_img(image_path)
model = tf.keras.models.load_model(model_path)
@@ -136,7 +136,7 @@ def check_stream(input=0, model_path=default_model_path, show=False):
when filepath: stream from videofile.
show (bool): Whether or not to display the media being analyzed.
Returns:
-data (list/dict): List of dictionaries containing detected faces along with the predicted emotions from each frame.
+data (list/dict): List of dictionaries containing detected faces along with the predicted expressions from each frame.
"""
cap = cv2.VideoCapture(input)
model = tf.keras.models.load_model(model_path)
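`check_stream` builds on a standard per-frame pattern: read frames until the capture is exhausted, analyze each one, and collect the per-frame dicts. That pattern can be sketched without OpenCV; `FakeCapture` below is a hypothetical stand-in for `cv2.VideoCapture` (whose `read()` likewise returns an `(ok, frame)` pair), and the `analyze` callback stands in for `face_check`:

```python
class FakeCapture:
    """Hypothetical stand-in for cv2.VideoCapture over a fixed frame list."""
    def __init__(self, frames):
        self._frames = list(frames)

    def read(self):
        # Mimics VideoCapture.read(): (True, frame) until frames run out.
        if self._frames:
            return True, self._frames.pop(0)
        return False, None

def analyze_stream(cap, analyze):
    """Read frames until exhausted, collecting one result dict per frame."""
    results = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results.append(analyze(frame))
    return results

cap = FakeCapture(["frame1", "frame2"])
results = analyze_stream(cap, lambda f: {"faces": [], "frame": f})
print(len(results))  # 2 (one result dict per frame)
```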
