
Facial Landmark Detection using CNN

Here is a quick demo

8 Points with Face alignment / Head Pose

Demo with head pose

6 Points

Demo

Prerequisites

TensorFlow, NumPy, OpenCV

Datasets

Sources
300-VW
300-W
AFW
HELEN
IBUG
LFPW
  • Some of the datasets above come pre-split into training and test sets; the others were split randomly by hand.
  • The prepared dataset is stored and served to the model as sharded TFRecord files.
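As a rough illustration of what "sharded" means here, the snippet below generates shard file names in the widely used name-NNNNN-of-MMMMM convention. This is only a sketch: the actual naming scheme used by prep_tfrecords.py may differ.

```python
def shard_filenames(split, n_shards):
    """Return TFRecord shard names like trainset-00000-of-00004.tfrecord.

    Hypothetical helper for illustration; the real naming in
    prep_tfrecords.py may not match this exactly.
    """
    return [f"{split}-{i:05d}-of-{n_shards:05d}.tfrecord" for i in range(n_shards)]
```

Splitting the data across several files this way lets tf.data interleave reads from multiple shards for better input-pipeline throughput.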

Datasets file structure

{ dataset }
├── testset
│   ├── {img}.(jpg|png)
│   └── {img}.pts
└── trainset
    ├── {img}.(jpg|png)
    └── {img}.pts
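The .pts annotation files in these datasets (300-W, HELEN, LFPW, etc.) typically follow the ibug convention: a small header, then the landmark coordinates between "{" and "}". A minimal parser, assuming that convention holds for the files you feed it:

```python
def read_pts(text):
    """Parse an ibug-style .pts annotation into a list of (x, y) tuples.

    Assumes the common format:
        version: 1
        n_points: N
        {
        x1 y1
        ...
        }
    """
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    start = lines.index("{") + 1   # coordinates begin after the opening brace
    end = lines.index("}")         # and stop at the closing brace
    return [tuple(float(v) for v in line.split()) for line in lines[start:end]]
```

For example, a 2-point file parses to [(1.5, 2.0), (3.0, 4.5)].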

300-VW

Since this dataset consists of (.avi) video files (unlike the others), it should be served in the following format.

300VW
├── testset
│   ├── {sample name}
│   │   ├── annot / {frame_no}.pts
│   │   └── vid.avi
└── trainset
    ├── {sample name}
    │   ├── annot / {frame_no}.pts
    │   └── vid.avi
Frame numbers (frame_no) follow the "%06d" format.
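Concretely, "%06d" zero-pads the frame number to six digits, so frame 7 maps to the annotation file 000007.pts:

```python
frame_no = 7
# "%06d" zero-pads to six digits: 7 -> "000007"
fname = "%06d.pts" % frame_no
# equivalently, with an f-string: f"{frame_no:06d}.pts"
```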

Note: the dataset base directory must be named 300VW; this name is hardcoded in prep_tfrecords.py.

Generate TFRecord files

Provide the dataset directory and the path where the TFRecord files should be stored:

python prep_tfrecords.py <dataset_loc> <tfrecord_save_loc> (--test_set) --n_points <6 (or) 8>

Training

Command to train the model.

python train.py <model_type> \
 --n_points <6 (or) 8> \
 --tfrecords_dir <tfrecords_dir> \
 --load_from <best_checkpoint_to_start_from> \ # This can be skipped
 --epochs 10 --batch_size 1024 --learning_rate 0.001
  • Check here for model types

Evaluate

python train.py <model_type> \
 --n_points <6 (or) 8> \
 --tfrecords_dir <tfrecords_dir> \
 --load_from <model_checkpoint_to_eval> \
 --eval_model

Export

The best model can be exported to the Keras-native .keras format:

python train.py <model_type> \
 --load_from <checkpoint_to_export> \
 --export_model <export_as>

Visual Test

To visually test the model on a video file or a directory of images, run the command below:

python visual_test.py <exported_model_file> <avi_(or)_dir_loc> (--save_video) --n_points <6 (or) 8>

--save_video will save the visual output to output.mp4

Visual Test with Face alignment / Head Pose

This only works with the 8-point model.

To visually test the model on a video file or a directory of images, run the command below:

python visual_test.py <exported_model_file> <avi_(or)_dir_loc> (--save_video) --n_points 8 --draw_headpose

Addendum

License
