This is the repository containing our solution for the FG-2020 ABAW Competition. Pretrained models can be downloaded through this link, under the `Multitask-CNN` and `Multitask-CNN-RNN` folders.
We aim for a unified model that solves three tasks: Facial Action Unit (FAU) prediction, Facial Expression (seven basic emotions) prediction, and Valence and Arousal prediction. For short, we refer to these tasks as FAU, EXPR and VA.
UPDATES: The challenge leaderboard has been released. Our solution won two challenge tracks (FAU and VA) among six teams!
To run the demo, make sure you have installed the packages listed in Requirements as well as MTCNN, and have downloaded all pretrained weights. Then modify `video_file` in `emotion_demo.py` to point to the video you want to process, and run `python emotion_demo.py`. The output video will be saved under `save_dir`.
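For reference, a minimal sketch of what those two variables might look like inside `emotion_demo.py`; the paths below are placeholders, not files shipped with the repository:

```python
# Inside emotion_demo.py (illustrative values only):
video_file = "examples/my_clip.mp4"   # the video you want to process
save_dir = "demo_output"              # the annotated output video is written here
```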
Before training, we change the data distribution of the experimental datasets by (1) importing external datasets, such as the DISFA dataset for FAU, the ExpW dataset for EXPR, and the AFEW-VA dataset for VA; and (2) resampling the minority and majority classes. Our goal is to create a more balanced data distribution for each individual class.
This is the data distribution of the Aff-wild2 dataset, the DISFA dataset, and the merged dataset. We resampled the merged dataset using ML-ROS (Multilabel Random Oversampling).
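To make the resampling concrete, here is a minimal sketch of multilabel random oversampling under our own simplifying assumptions (a binary AU label matrix and a fixed cloning budget); it is not the exact implementation used in this repository.

```python
import numpy as np

def ml_ros(labels, clone_fraction=0.1, seed=0):
    """Multilabel random oversampling (simplified sketch).

    labels: (N, C) binary matrix of AU annotations.
    Returns indices into the original dataset, with extra clones of
    samples that carry under-represented AUs.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_classes = labels.shape
    indices = list(range(n_samples))

    n_clones = int(clone_fraction * n_samples)
    for _ in range(n_clones):
        counts = labels[indices].sum(axis=0)          # current label frequencies
        minority = int(np.argmin(counts))             # rarest AU so far
        candidates = np.where(labels[:, minority] == 1)[0]
        if len(candidates) == 0:
            break
        indices.append(int(rng.choice(candidates)))   # clone one sample carrying that AU
    return np.array(indices)

# Example: 6 samples, 3 AUs; AU 2 is rare and gets oversampled.
labels = np.array([[1, 0, 0], [1, 1, 0], [1, 0, 0], [0, 1, 0], [1, 0, 1], [1, 1, 0]])
print(ml_ros(labels, clone_fraction=0.5))
```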
This is the data distribution of the Aff-wild2 dataset, the ExpW dataset, and the merged dataset. We resample the merged dataset so that instances of each class have the same probability of appearing in one epoch.
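One simple way to achieve this in PyTorch is inverse-frequency weighted sampling; the label array below is a toy placeholder, not the merged Aff-wild2/ExpW annotations.

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

# Toy per-frame expression labels (0..6 for the seven basic emotions).
expr_labels = np.array([0, 0, 0, 1, 2, 2, 3, 4, 5, 6, 6, 6, 6])

# Weight each sample by the inverse frequency of its class so that every
# class is expected to appear equally often within one epoch.
class_counts = np.bincount(expr_labels, minlength=7)
sample_weights = 1.0 / class_counts[expr_labels]

sampler = WeightedRandomSampler(
    weights=torch.as_tensor(sample_weights, dtype=torch.double),
    num_samples=len(expr_labels),
    replacement=True,
)
# loader = torch.utils.data.DataLoader(dataset, batch_size=64, sampler=sampler)
```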
This is the data distribution of the Aff-wild2 dataset, the AFEW-VA dataset, and the merged dataset. We discretize the continuous valence/arousal scores in [-1, 1] into 20 bins of equal width, treat each bin as a category, and apply the oversampling/undersampling strategy.
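For illustration, the binning step could look like the sketch below (20 equal-width bins over [-1, 1]); the helper name and the exact handling of bin edges are our assumptions.

```python
import numpy as np

def va_to_bins(values, n_bins=20):
    """Map continuous valence/arousal scores in [-1, 1] to bin indices 0..n_bins-1."""
    interior_edges = np.linspace(-1.0, 1.0, n_bins + 1)[1:-1]
    # Counting how many interior edges lie at or below each value gives the
    # 0-based bin index directly; clip guards values slightly outside [-1, 1].
    idx = np.digitize(values, interior_edges, right=False)
    return np.clip(idx, 0, n_bins - 1)

valence = np.array([-1.0, -0.73, 0.0, 0.31, 1.0])
print(va_to_bins(valence))  # bins 0, 2, 10, 13, 19
```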
Currently, each dataset contains only one type of label (FAU, EXPR, or VA). We therefore propose an algorithm that allows a deep neural network to learn multiple tasks from partial labels. The algorithm has two steps: first, we train a teacher model to perform all three tasks, where each instance is supervised by the ground-truth label of its corresponding task; second, we treat the outputs of the teacher model as soft labels and use both the soft labels and the ground truths to train the student model.
This is the diagram of our proposed algorithm. Given the input images of the three tasks and their ground truths, we first train the teacher model using the teacher loss between the teacher outputs and the ground truths. Second, we train the student model using the student loss, which consists of two parts: one is calculated from the teacher outputs and the student outputs, and the other is calculated from the ground truths and the student outputs.
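As a rough illustration of such a two-part loss, here is a hedged sketch for the expression branch only; the KL-divergence distillation term, the temperature `T`, and the weighting factor `alpha` are our assumptions and may differ from the exact formulation in the paper.

```python
import torch
import torch.nn.functional as F

def student_loss_expr(student_logits, teacher_logits, targets, mask, alpha=0.5, T=1.0):
    """Two-part student loss for the expression task (sketch).

    student_logits, teacher_logits: (B, 7) raw scores.
    targets: (B,) ground-truth class indices, valid only where mask is True,
    since each instance carries labels for just one of the three tasks.
    """
    # Part 1: distillation term against the teacher's soft labels (all instances).
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    distill = F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)

    # Part 2: supervised term against the ground truth (labelled instances only).
    if mask.any():
        supervised = F.cross_entropy(student_logits[mask], targets[mask])
    else:
        supervised = student_logits.new_zeros(())

    return alpha * distill + (1.0 - alpha) * supervised
```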
- PyTorch 1.3.1 or higher
- NumPy
- pytorch_benchmarks
- pandas, pickle, matplotlib
Before training our models on the Aff-wild2 dataset, we load weights pretrained on other emotion datasets, such as the FER2013 dataset. These weights are part of the pytorch_benchmarks repository, but you need to download them manually. We provide this link where you can download the three folders `fer+`, `fer`, and `sfew`. Please save the three folders under `pytorch_benchmarks/models`.
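For reference, a minimal sketch of how such pretrained weights could be loaded before fine-tuning; the checkpoint path, the ResNet-50 backbone, and the use of `strict=False` are assumptions about the downloaded files, not guarantees.

```python
import torch
import torchvision

# Hypothetical checkpoint path; the actual file names inside the fer+/fer/sfew
# folders may differ.
checkpoint_path = "pytorch_benchmarks/models/fer+/checkpoint.pth"

model = torchvision.models.resnet50(num_classes=8)   # FER+ uses 8 emotion classes
state = torch.load(checkpoint_path, map_location="cpu")
state_dict = state.get("state_dict", state)          # unwrap if saved inside a wrapper dict

# strict=False tolerates head/naming mismatches between the emotion backbone
# and the multitask model that is later fine-tuned on Aff-wild2.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)
```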
You can download the weights of the Multitask-CNN and Multitask-CNN-RNN models pretrained on the Aff-wild2 dataset by running:
bash download_pretrained_weights_aff_wild2.sh
- Download all required datasets, and crop and align the face images.
- Create the annotation files for each dataset, using the scripts in the `create_annotation_file` directory.
- Change the annotation file paths in `Multitask-CNN(Multitask-CNN-RNN)/PATH/__init__.py`.
- Training: For Multitask-CNN, run `python train.py --force_balance --name image_size_112_n_students_5 --image_size 112 --pretrained_teacher_model path-to-teacher-model-if-exists`. The argument `--name` is the experiment name (save path), and `--force_balance` makes the sampled dataset more balanced. For Multitask-CNN-RNN, run `python train.py --name image_size_112_n_students_5_seq_len=32 --image_size 112 --seq_len 32 --frozen --pretrained_resnet50_model path-to-the-pretrained-Multitask-CNN-model --pretrained_teacher_model path-to-teacher-model-if-exists`.
- Validation: Run `python val.py --name image_size_112_n_students_5 --image_size 112 --teacher_model_path path-to-teacher-model --mode Validation --ensemble` for Multitask-CNN, and `python val.py --name image_size_112_n_students_5_seq_len=32 --image_size 112 --teacher_model_path path-to-teacher-model --pretrained_resnet50_model path-to-the-pretrained-Multitask-CNN-model --mode Validation --ensemble --seq_len 32` for Multitask-CNN-RNN.
- From the results on the validation set, we obtain the best AU thresholds. Modify the line `best_thresholds_over_models = []` in `test.py` to the best thresholds found on the validation set (a sketch of one way to select them follows this list).
- Testing: Run `python test.py --name image_size_112_n_students_5 --image_size 112 --teacher_model_path path-to-teacher-model --mode Test --save_dir Predictions --ensemble` for Multitask-CNN, and `python test.py --name image_size_112_n_students_5_seq_len=32 --image_size 112 --teacher_model_path path-to-teacher-model --pretrained_resnet50_model path-to-the-pretrained-Multitask-CNN-model --mode Test --ensemble --seq_len 32` for Multitask-CNN-RNN.
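As referenced in the list above, one possible way to pick the per-AU thresholds on the validation set is a simple grid search maximising F1; the function below is a hedged sketch, not the repository's own selection code.

```python
import numpy as np
from sklearn.metrics import f1_score

def select_au_thresholds(probs, labels, candidates=np.arange(0.05, 1.0, 0.05)):
    """Pick, for each AU, the threshold that maximises F1 on the validation set.

    probs:  (N, n_aus) predicted AU probabilities.
    labels: (N, n_aus) binary ground truth.
    """
    thresholds = []
    for au in range(probs.shape[1]):
        scores = [f1_score(labels[:, au], probs[:, au] >= t) for t in candidates]
        thresholds.append(float(candidates[int(np.argmax(scores))]))
    return thresholds

# The resulting list would then be pasted into `best_thresholds_over_models` in test.py.
```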
- Download the pretrained CNNs and unzip them.
- Crop and align the face images, and save them to a directory.
- For the CNN model, run `python run_pretrained_model.py --image_dir directory-containing-sequence-of-face-images --model_type CNN --batch_size 12 --eval_with_teacher --eval_with_students --save_dir save-directory --workers 8 --ensemble`. For the CNN-RNN model, run `python run_pretrained_model.py --image_dir directory-containing-sequence-of-face-images --model_type CNN-RNN --seq_len 32 --batch_size 6 --eval_with_teacher --eval_with_students --save_dir save-directory --workers 8 --ensemble`.
@inproceedings{deng2020multitask,
title={Multitask Emotion Recognition with Incomplete Labels},
author={Deng, Didan and Chen, Zhaokang and Shi, Bertram E},
booktitle={2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020)},
pages={828--835},
year={2020},
organization={IEEE Computer Society}
}