EmotiW2018, Audio-video Based Emotion Recognition Challenge

Video-based Emotion Recognition Using Deeply-Supervised Neural Networks

An ensemble of the proposed models achieves an accuracy of 61.10% in the EmotiW 2018 challenge. Details about the challenge can be found at: https://sites.google.com/view/emotiw2018. For more details about the code, please refer to our paper.

Model accuracy on the validation set:

[figure: per-model accuracy on the validation set]

Accuracy of our top 4 submissions to EmotiW 2018:

[figure: accuracy of the top 4 submissions]

Requirements

pycaffe
python 2.7
ffmpeg
opencv 2.4.11
cuda 8.0


Datasets and models

  1. Datasets:

    The two datasets we used can be downloaded from EmotiW 2018 and the Real-world Affective Faces database (RAF-DB). You can send an e-mail to the authors to request access to the databases.

  2. Data Pre-processing:

    (1) Extract all frames of the videos:

    cd ./scripts 
    python extract_frames.py  
    
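    The contents of extract_frames.py are not reproduced here; as a rough sketch, it might build and run one ffmpeg command per video, along these lines (the output layout and frame-naming pattern are assumptions, not the script's actual behavior):

```python
import os

def ffmpeg_frame_cmd(video_path, out_dir):
    """Build an ffmpeg command that dumps every frame of a video
    as zero-padded PNGs, e.g. out_dir/<video>/000001.png, ..."""
    stem = os.path.splitext(os.path.basename(video_path))[0]
    pattern = os.path.join(out_dir, stem, "%06d.png")
    # -vsync 0 keeps one output image per decoded frame
    return ["ffmpeg", "-i", video_path, "-vsync", "0", pattern]

cmd = ffmpeg_frame_cmd("videos/012345678.avi", "frames")
print(" ".join(cmd))
```

    Each command would then be run with subprocess.call(cmd) from the script.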

    (2) We employ the MTCNN implementation in facenet to detect faces in the frames (image size = 400, margin = 100).

    (3) Compile Dlib's Python interface and download shape_predictor_68_face_landmarks.dat. Then get the 68 landmarks for the 400*400 images cropped by MTCNN.

    cd ./scripts 
    wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
    python getLandmarks.py
    

    (4) Align faces in the cropped images. landmark_list.txt is the list of 68 landmarks detected by Dlib; ./CropData/ is the directory of 400*400 cropped images; ./AlignData/ is the target directory for the aligned faces.

    ./face_align  landmark_list.txt  ./CropData/  ./AlignData/
    
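    The face_align tool ships as a binary, so the exact alignment method is an assumption here. A common eye-based approach, sketched below in numpy, computes a similarity transform (rotation + scale + translation) that maps the two eye centers of the 68-point Dlib model (indices 36-41 for the left eye, 42-47 for the right, 0-based) to fixed canonical positions in a 400*400 output; the canonical eye positions chosen below are illustrative:

```python
import numpy as np

def eye_centers(landmarks):
    """landmarks: (68, 2) array of (x, y) points from Dlib."""
    left = landmarks[36:42].mean(axis=0)
    right = landmarks[42:48].mean(axis=0)
    return left, right

def similarity_transform(landmarks, out_size=400, eye_y=0.35, eye_dx=0.25):
    """Return a 2x3 affine matrix that rotates/scales the face so the
    eye centers land at ((0.5 -/+ eye_dx) * out_size, eye_y * out_size)."""
    left, right = eye_centers(landmarks)
    dx, dy = right - left
    angle = np.arctan2(dy, dx)              # in-plane roll of the face
    dist = np.hypot(dx, dy)                 # current inter-eye distance
    scale = (2 * eye_dx * out_size) / dist  # desired / current distance
    cos, sin = scale * np.cos(angle), scale * np.sin(angle)
    # Rotate/scale about the origin, then translate the left eye
    # onto its canonical target position.
    tx = (0.5 - eye_dx) * out_size - (cos * left[0] + sin * left[1])
    ty = eye_y * out_size - (-sin * left[0] + cos * left[1])
    return np.array([[cos, sin, tx], [-sin, cos, ty]])
```

    The resulting matrix can be passed to an affine warp (e.g. cv2.warpAffine) to produce the aligned face crop.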
  3. Models:

    To train DSN-VGG-FACE, DSN-Res-50 or DenseNet-121,
    you can fine-tune from the pretrained models: VGG-FACE, ResNet-50, DenseNet-121.

    git clone https://github.com/EvelynFan/DSN.git  
    cd ./DSN-VGG-FACE  
    python run_dsn.py  
    cd ./DSN-Res-50  
    python run_dsn_res50.py
    

    To test on the validation and test sets of the EmotiW 2018 Emotion Challenge dataset, please download our models from Google Drive.

    cd ./scripts 
    python test_video.py  
    
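    Since each model scores individual frames, test_video.py has to turn frame-level outputs into one label per video. A plausible aggregation rule (the class ordering and the frame-averaging strategy are assumptions about the script, not taken from it) is to average the per-frame softmax scores and take the arg-max over the seven EmotiW emotion categories:

```python
import numpy as np

# The seven EmotiW categories; this ordering is an assumption.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Neutral", "Sad", "Surprise"]

def video_prediction(frame_scores):
    """frame_scores: (n_frames, 7) array of per-frame softmax outputs.
    Average over frames, then take the arg-max class for the video."""
    mean = np.asarray(frame_scores).mean(axis=0)
    return EMOTIONS[int(np.argmax(mean))], mean
```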
  4. Model fusion:

    cd ./scripts 
    python merge_score.py
    
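    Score-level fusion of this kind typically reduces to a weighted average of each model's score matrix followed by an arg-max per video. The sketch below illustrates the idea with equal weights; the actual fusion weights used by merge_score.py for the EmotiW submissions are not assumed here:

```python
import numpy as np

def merge_scores(score_mats, weights=None):
    """score_mats: list of (n_videos, n_classes) score matrices,
    one per model. Returns the weighted-average matrix and the
    per-video arg-max class indices. Equal weights by default."""
    mats = np.stack([np.asarray(m, dtype=float) for m in score_mats])
    w = np.ones(len(score_mats)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                           # normalize the weights
    fused = np.tensordot(w, mats, axes=1)     # sum_i w[i] * mats[i]
    return fused, fused.argmax(axis=1)
```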

Citing

If you find the code useful, please cite:

@inproceedings{fan2018video,
  title={Video-based Emotion Recognition Using Deeply-Supervised Neural Networks},
  author={Fan, Yingruo and Lam, Jacqueline CK and Li, Victor OK},
  booktitle={Proceedings of the 2018 on International Conference on Multimodal Interaction},
  pages={584--588},
  year={2018},
  organization={ACM}
}
