@inproceedings{seon2023stop,
author = {Seon, Jonghyeon and Hwang, Jaedong and Mun, Jonghwan and Han, Bohyung},
title = {Stop or Forward: Dynamic Layer Skipping for Efficient Action Recognition},
booktitle = {WACV},
year = {2023},
}
Our experiments are conducted on 4 Titan XP GPUs (48GB in total). Set up the environment with:
conda env create -n sof -f ./sofnet_env.yml
conda activate sofnet
pip install tensorboardX thop
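Before preparing the data, you can verify that the environment sees all four GPUs; a minimal check, assuming the env file installs PyTorch (which the AR-Net-based code requires):
# Sanity check: the environment should detect all four GPUs.
import torch
print(torch.cuda.is_available())   # expect: True
print(torch.cuda.device_count())   # expect: 4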
- Move the ActivityNet-v1.3 train/test splits (and the classes file) from /data to /foo/bar/activity-net-v1.3, where /foo/bar is the directory in which you save the datasets.
- Download the ActivityNet-v1.3 videos from here (contact the dataset maintainers if any videos are missing) and save them to /foo/bar/activity-net-v1.3/videos.
- Extract frames using the script from the repository:
cd ./ops
python video_jpg.py /foo/bar/activity-net-v1.3/videos /foo/bar/activity-net-v1.3/frames --parallel
The frames will be saved to /foo/bar/activity-net-v1.3/frames.
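After these steps, the dataset root should contain the split/classes files plus the videos and frames subdirectories. A quick sketch to verify the layout (the ROOT value and this check are illustrative, not part of the repository):
# Hypothetical sanity check for the dataset layout described above;
# adjust ROOT to your own /foo/bar.
import os
ROOT = "/foo/bar/activity-net-v1.3"
for sub in ("videos", "frames"):
    path = os.path.join(ROOT, sub)
    n = len(os.listdir(path)) if os.path.isdir(path) else 0
    print(f"{path}: {n} entries")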
To train the models on ActivityNet-v1.3, run:
sh sof_train.sh
Training might take around 1-2 days.
To test the models on ActivityNet-v1.3, run:
sh sof_test.sh
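The thop package installed above is typically used to measure computational cost. A minimal, illustrative sketch with a stand-in ResNet-50 backbone (the actual model, input shape, and measurement in sof_test.sh may differ; torchvision is assumed to be available):
# Count MACs and parameters for a single RGB frame with thop.
import torch
from thop import profile
from torchvision.models import resnet50
model = resnet50()
dummy = torch.randn(1, 3, 224, 224)   # one 224x224 RGB frame
macs, params = profile(model, inputs=(dummy,))
print(f"MACs: {macs / 1e9:.2f} G, params: {params / 1e6:.2f} M")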
Our code is based on AR-Net.