Official PyTorch repository for CG-DETR, "Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding".


CG-DETR : Calibrating the Query-Dependency of Video Representation via Correlation-guided Attention for Video Temporal Grounding

Correlation-Guided Query-Dependency Calibration for Video Temporal Grounding

WonJun Moon, Sangeek Hyun, SuBeen Lee, Jae-Pil Heo
Sungkyunkwan University


πŸ”– Abstract

Recent endeavors in video temporal grounding enforce strong cross-modal interactions through attention mechanisms to overcome the modality gap between video and text query. However, previous works treat all video clips equally regardless of their semantic relevance with the text query in attention modules. In this paper, our goal is to provide clues for query-associated video clips within the cross-modal encoding process. With our Correlation-Guided Detection Transformer (CG-DETR), we explore the appropriate clip-wise degree of cross-modal interactions and how to exploit such degrees for prediction. First, we design an adaptive cross-attention layer with dummy tokens. Dummy tokens conditioned by text query take a portion of the attention weights, preventing irrelevant video clips from being represented by the text query. Yet, not all word tokens equally inherit the text query's correlation to video clips. Thus, we further guide the cross-attention map by inferring the fine-grained correlation between video clips and words. We enable this by learning a joint embedding space for high-level concepts, i.e., moment and sentence level, and inferring the clip-word correlation. Lastly, we use a moment-adaptive saliency detector to exploit each video clip's degrees of text engagement. We validate the superiority of CG-DETR with state-of-the-art results on various benchmarks for both moment retrieval and highlight detection.


πŸ“’ To be updated

Todo

  • Upload instructions for dataset download
  • Update model zoo
  • Upload implementation

πŸ“‘ Datasets

QVHighlights : Download the official feature files for the QVHighlights dataset from moment_detr_features.tar.gz (8GB).

tar -xf path/to/moment_detr_features.tar.gz

If the link is inaccessible, download from the alternative below.

QVHighlight 9.34GB.

For other datasets, we provide extracted features:

Charades-STA 33.18GB. (Including SF+C and VGG features)
TACoS 290.7MB.
TVSum 69.1MB.
Youtube 191.7MB.

After downloading, either prepare the data directory as shown below or change 'feat_root' in the dataset shell scripts (e.g., the TVSum scripts) under 'cg_detr/scripts/*/'. A setup sketch follows the directory tree.

.
β”œβ”€β”€ CGDETR
β”‚   β”œβ”€β”€ cg_detr
β”‚   β”œβ”€β”€ data
β”‚   β”œβ”€β”€ results
β”‚   β”œβ”€β”€ run_on_video
β”‚   β”œβ”€β”€ standalone_eval
β”‚   └── utils
└── features
    β”œβ”€β”€ qvhighlight
    β”œβ”€β”€ charades
    β”œβ”€β”€ tacos
    β”œβ”€β”€ tvsum
    └── youtube_uni
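
The sketch below is one way to arrange the layout above; the source paths are placeholders and the extracted directory names may differ from what your downloads produce, so adjust them (or simply edit 'feat_root') to match your setup.

# Assumed layout: gather the extracted feature directories under a single 'features' root.
mkdir -p features
ln -s /path/to/extracted/qvhighlight features/qvhighlight    # placeholder source paths
ln -s /path/to/extracted/charades features/charades
ln -s /path/to/extracted/tacos features/tacos
ln -s /path/to/extracted/tvsum features/tvsum
ln -s /path/to/extracted/youtube_uni features/youtube_uni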

πŸ› οΈ Installation

Python version 3.7 is required.

  1. Clone this repository.
git clone https://github.com/wjun0830/CGDETR.git
  2. Install the packages we used for training.
pip install -r requirements.txt
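
If you manage environments with conda, a minimal sketch of the two steps above looks like this (the environment name is arbitrary):

# Create and activate a Python 3.7 environment, then install the pinned packages.
conda create -n cg_detr python=3.7 -y
conda activate cg_detr
pip install -r requirements.txt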

πŸš€ Training

We provide training scripts for all datasets in the cg_detr/scripts/ directory.

QVHighlights Training

Training can be executed by running the shell script below:

bash cg_detr/scripts/train.sh  

The best validation accuracy is obtained at the last epoch.

Charades-STA

For training, run the shell script below:

bash cg_detr/scripts/charades_sta/train.sh
bash cg_detr/scripts/charades_sta/train_vgg.sh  

TACoS

For training, run the shell script below:

bash cg_detr/scripts/tacos/train.sh  

TVSum

For training, run the shell script below:

bash cg_detr/scripts/tvsum/train_tvsum.sh  

Best results are stored in 'results_[domain_name]/best_metric.jsonl'.
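
To quickly inspect the stored metrics across domains, something like the following works (it only assumes the 'results_[domain_name]' naming pattern above):

# Print the stored best-metric entries for every domain's result directory.
for f in results_*/best_metric.jsonl; do
  echo "== $f =="
  cat "$f"
done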

Youtube-hl

For training, run the shell script below:

bash cg_detr/scripts/youtube_uni/train.sh  

Best results are stored in 'results_[domain_name]/best_metric.jsonl'.

QVHighlights Training w/ Pretraining

Training can be executed by running the shell script below:

bash cg_detr/scripts/train.sh --num_dummies 45 --num_prompts 1 --total_prompts 10 --max_q_l 75 --resume pt_checkpoints/model_e0009.ckpt --seed 2018

The pretrained checkpoint 'model_e0009.ckpt' is available here.
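
The --resume flag above expects the file at 'pt_checkpoints/model_e0009.ckpt'; one way to arrange that (the download location is a placeholder) is:

# Place the downloaded pretrained checkpoint where the training command expects it.
mkdir -p pt_checkpoints
mv /path/to/downloaded/model_e0009.ckpt pt_checkpoints/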

πŸ‘€ QVHighlights Evaluation and Codalab Submission

Once the model is trained, hl_val_submission.jsonl and hl_test_submission.jsonl can be generated by running inference.sh. Compress them into a single .zip file and submit the results.

bash cg_detr/scripts/inference.sh results/{direc}/model_best.ckpt 'val'
bash cg_detr/scripts/inference.sh results/{direc}/model_best.ckpt 'test'

where {direc} is the directory containing the saved checkpoint. For more details, check standalone_eval/README.md.
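
For example, assuming the two submission files are written under results/{direc}, they can be bundled like this (the archive name is arbitrary; check the CodaLab page for exact requirements):

# Bundle the validation and test predictions into a single archive for submission.
cd results/{direc}
zip submission.zip hl_val_submission.jsonl hl_test_submission.jsonl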

πŸ“Ή Others (Custom video inference / training)

  • Running predictions on custom videos is also supported. Note that only the CLIP-only trained model is available for custom video inference.
    You can either
    1) prepare your custom video and text query under 'run_on_video/example', or
    2) modify the YouTube video URL and custom text query in 'run_on_video/run.py'
    (youtube_url: the video link URL, [vid_st_sec, vid_ec_sec]: start and end time of the video (specify less than 150 frames), desired_query: the text query).
    Then, run the following commands:
pip install ffmpeg-python ftfy regex
PYTHONPATH=$PYTHONPATH:. python run_on_video/run.py
  • For instructions on training with custom datasets, check here.

πŸ“¦ Model Zoo

Dataset                          Model file
QVHighlights                     checkpoints
Charades (Slowfast + CLIP)       checkpoints
Charades (VGG)                   checkpoints
TACoS                            checkpoints
TVSum                            checkpoints
Youtube-HL                       checkpoints
QVHighlights w/ PT (47.97 mAP)   checkpoints
QVHighlights only CLIP           checkpoints

πŸ“– BibTeX

If you find the repository or the paper useful, please use the following entry for citation.

@article{moon2023correlation,
  title={Correlation-guided Query-Dependency Calibration in Video Representation Learning for Temporal Grounding},
  author={Moon, WonJun and Hyun, Sangeek and Lee, SuBeen and Heo, Jae-Pil},
  journal={arXiv preprint arXiv:2311.08835},
  year={2023}
}

☎️ Contributors and Contact

If there are any questions, feel free to contact the authors: WonJun Moon (wjun0830@gmail.com), Sangeek Hyun (hse1032@gmail.com), and SuBeen Lee (leesb7426@gmail.com)

β˜‘οΈ LICENSE

The annotation files and many parts of the implementation are borrowed from Moment-DETR and QD-DETR. Our code is released under the MIT license.