Zengqun Zhao, Qingshan Liu. "Former-DFER: Dynamic Facial Expression Recognition Transformer". In ACM International Conference on Multimedia, 2021.
conda install pytorch==1.8.1 torchvision==0.9.1 torchaudio==0.8.1 cudatoolkit=10.2 -c pytorch
- Step 1: Download the DFEW dataset.
- Step 2: Fill in all the `your_dataset_path` entries in `script.py`, then run `script.py`.
- Step 3: Run `sh main_DFEW_trainer.sh`.
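The steps above can be sketched in shell; the stand-in `script.py` content and the `/data/DFEW` path below are illustrative assumptions, not taken from the repository:

```shell
# Stand-in for the repo's script.py, containing the placeholder path.
printf 'data_root = "your_dataset_path"\n' > script.py
# Step 2: substitute your actual dataset root for the placeholder.
sed -i 's|your_dataset_path|/data/DFEW|g' script.py
# Sanity check before launching training (Step 3 would be: sh main_DFEW_trainer.sh).
grep -q '/data/DFEW' script.py && echo "path set"
```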
The models trained on DFEW (five folds: fd1, fd2, fd3, fd4, fd5) can be downloaded here (Google Drive).
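DFEW results are conventionally reported as the mean over the five folds above. A minimal sketch of that aggregation; the per-fold numbers are purely illustrative placeholders, not results from the paper:

```python
# Hypothetical per-fold WAR values (placeholders for illustration only);
# the reported DFEW score is the mean over the five folds fd1-fd5.
fold_war = [60.0, 58.5, 61.2, 59.8, 60.5]
mean_war = sum(fold_war) / len(fold_war)
print(f"5-fold mean WAR: {mean_war:.2f}")
```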
Recently, a new dynamic FER dataset named FERV39k was proposed; the results of Former-DFER on FERV39k are as follows:
| Happiness | Sadness | Neutral | Anger | Surprise | Disgust | Fear | UAR | WAR |
|---|---|---|---|---|---|---|---|---|
| 67.57 | 44.16 | 51.81 | 48.93 | 25.09 | 10.80 | 9.80 | 36.88 | 45.72 |
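UAR (unweighted average recall) is the plain mean of the per-class accuracies in the table, so it can be reproduced directly; WAR (weighted average recall) additionally weights each class by its sample count, which the table does not provide. A quick check of the UAR column:

```python
# Per-class accuracies (%) from the FERV39k results table above.
per_class = {
    "Happiness": 67.57, "Sadness": 44.16, "Neutral": 51.81,
    "Anger": 48.93, "Surprise": 25.09, "Disgust": 10.80, "Fear": 9.80,
}
# UAR is the unweighted mean over classes; WAR would also need the
# number of test samples per class, which is not listed here.
uar = sum(per_class.values()) / len(per_class)
print(round(uar, 2))  # 36.88, matching the UAR column
```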
If you find our work useful, please consider citing our paper:
@inproceedings{zhao2021former,
  title={Former-DFER: Dynamic Facial Expression Recognition Transformer},
  author={Zhao, Zengqun and Liu, Qingshan},
  booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
  pages={1553--1561},
  year={2021}
}