This is the official repository for Hawk: Learning to Understand Open-World Video Anomalies.
Jiaqi Tang^, Hao Lu^, Ruizheng Wu, Xiaogang Xu, Ke Ma, Cheng Fang,
Bin Guo, Jiangbo Lu, Qifeng Chen and Ying-Cong Chen*
^: Equal contribution. *: Corresponding Author.
- 🚩 Current VAD systems are often limited by their superficial semantic understanding of scenes and minimal user interaction.
- 🚩 Additionally, the prevalent data scarcity in existing datasets restricts their applicability in open-world scenarios.
- ✅ Sept. 26, 2024 - Hawk is accepted by NeurIPS 2024.
- ✅ July 29, 2024 - We release the dataset of Hawk. Check this Google Cloud link to DOWNLOAD it.
- DOWNLOAD all video datasets from their original sources.
- Google Drive link to DOWNLOAD our annotations.
- Data structure: each folder contains one annotation file (e.g., CUHK Avenue, DoTA). The `All_Mix` directory contains all of the datasets for training and testing.
- The dataset is organized as follows:
```
data
├── All_Mix
│   ├── all_videos_all.json
│   ├── all_videos_test.json
│   └── all_videos_train.json
│
├── CUHK_Avenue
│   └── Avenue.json
├── DoTA
│   └── DoTA.json
├── Ped1
│   ├── ...
├── ...
└── UCF_Crime
    └── ...
```
Note: the data paths inside the annotation files should be redefined to match the location of the videos on your machine.
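As a minimal sketch of what redefining the data paths might look like, the snippet below rewrites a path prefix in one annotation file. The field name `video` and the two prefixes are assumptions for illustration; check them against the actual schema of the released JSON files before running it.

```python
import json
from pathlib import Path

# Hypothetical prefixes -- replace with the prefix baked into the released
# annotations and the dataset root on your machine.
OLD_PREFIX = "/data/hawk"           # assumed original path prefix
NEW_PREFIX = "/home/user/datasets"  # your local dataset root


def redefine_paths(ann_file: str) -> None:
    """Rewrite the video paths in one annotation JSON file in place."""
    path = Path(ann_file)
    records = json.loads(path.read_text())
    for rec in records:
        # "video" is an assumed field name; adjust to the actual schema.
        rec["video"] = rec["video"].replace(OLD_PREFIX, NEW_PREFIX)
    path.write_text(json.dumps(records, indent=2))


if __name__ == "__main__":
    redefine_paths("data/All_Mix/all_videos_train.json")
```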
The following is a BibTeX reference:
```bibtex
@inproceedings{atang2024hawk,
  title     = {Hawk: Learning to Understand Open-World Video Anomalies},
  author    = {Tang, Jiaqi and Lu, Hao and Wu, Ruizheng and Xu, Xiaogang and Ma, Ke and Fang, Cheng and Guo, Bin and Lu, Jiangbo and Chen, Qifeng and Chen, Ying-Cong},
  year      = {2024},
  booktitle = {Neural Information Processing Systems (NeurIPS)}
}
```
If you have any questions, please feel free to send an email to jtang092@connect.hkust-gz.edu.cn.