Shiftable Context: Addressing Training-Inference Context Mismatch in Simultaneous Speech Translation

This repository is a fork of https://github.com/pytorch/fairseq containing the supplementary code used in our ICML 2023 paper Shiftable Context: Addressing Training-Inference Context Mismatch in Simultaneous Speech Translation. Our modifications to the Augmented Memory Transformer that implement Shiftable Context are provided in the file fairseq/models/speech_to_text/modules/augmented_memory_attention.py.

If you use this code, please consider citing our paper.

The data preparation script for the MuST-C dataset we used in our paper is examples/speech_to_text/prep_mustc_data.py.
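
For reference, the script can be invoked as in the fairseq speech_to_text documentation (the ${mustc_root} placeholder and the vocabulary settings below come from that documentation, not from our paper):

python examples/speech_to_text/prep_mustc_data.py \
    --data-root ${mustc_root} --task asr \
    --vocab-type unigram --vocab-size 5000
python examples/speech_to_text/prep_mustc_data.py \
    --data-root ${mustc_root} --task st \
    --vocab-type unigram --vocab-size 8000

Here ${mustc_root} is the directory containing the downloaded MuST-C language pairs (e.g., en-de).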

The script we used to run the ASR pretraining experiments on a single GPU for the ICML 2023 paper is the following:

fairseq-train ${data_dir} \
    --config-yaml config_asr.yaml --train-subset train_asr --valid-subset dev_asr \
    --save-dir ${save_dir} --num-workers 2 --max-tokens 80000 --max-update 100000 \
    --task speech_to_text --criterion label_smoothed_cross_entropy \
    --arch convtransformer_augmented_memory --optimizer adam --adam-betas '[0.9,0.98]' --lr 0.0007 --lr-scheduler inverse_sqrt \
    --simul-type waitk_fixed_pre_decision --fixed-pre-decision-ratio 8 --waitk-lagging 1 \
    --warmup-updates 4000 --warmup-init-lr 0.0001 --clip-norm 10.0 --seed 3 --update-freq 4 \
    --ddp-backend legacy_ddp \
    --log-interval 50 \
    --segment-size 64 --right-context 0 --left-context 0 --max-memory-size 3 \
    --encoder-normalize-before --decoder-normalize-before --max-relative-position 16 \
    --patience 5 --keep-last-epochs 5

In the script, ${data_dir} refers to the directory containing the prepared dataset, and ${save_dir} refers to the directory in which model checkpoints are saved.

Similarly, the script we used to run the SimulST training experiments on a single GPU for the ICML 2023 paper is the following, where ${pre_train_dir} is the save directory from the ASR pretraining run:

fairseq-train ${data_dir} \
    --task speech_to_text --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \
    --save-dir ${save_dir} \
    --load-pretrained-encoder-from ${pre_train_dir}/checkpoint_average.pt \
    --arch convtransformer_augmented_memory \
    --simul-type waitk_fixed_pre_decision --criterion label_smoothed_cross_entropy --fixed-pre-decision-ratio 8 --waitk-lagging 1 \
    --max-tokens 80000 --num-workers 2 --update-freq 4 \
    --optimizer adam --adam-betas '[0.9,0.98]' --lr 0.00035 --lr-scheduler inverse_sqrt --warmup-updates 7500 --warmup-init-lr 0.0001 --clip-norm 10.0 \
    --max-update 100000 --seed 4 --ddp-backend legacy_ddp --log-interval 50 \
    --segment-size 64 --right-context 0 --left-context 0 --max-memory-size 3 \
    --encoder-normalize-before --decoder-normalize-before --max-relative-position 16 \
    --attention-dropout 0.2 --activation-dropout 0.2 --weight-decay 0.0001 \
    --patience 10 --keep-last-epochs 10

The script we used to average model checkpoints after both ASR and SimulST training is scripts/average_checkpoints.py.
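
For example, the most recent epoch checkpoints in ${save_dir} can be averaged as follows (the checkpoint count of 5 mirrors --keep-last-epochs in the ASR command above and is an assumption, not a value from the paper; the output name matches the checkpoint_average.pt file referenced elsewhere in this README):

python scripts/average_checkpoints.py \
    --inputs ${save_dir} \
    --num-epoch-checkpoints 5 \
    --output ${save_dir}/checkpoint_average.pt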

The script we used to prepare our test set is examples/speech_to_text/seg_mustc_data.py.
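
A typical invocation, following the fairseq SimulST documentation (the en-de direction and the ${mustc_root} and ${eval_data} placeholders are illustrative assumptions, not values from the paper):

python examples/speech_to_text/seg_mustc_data.py \
    --data-root ${mustc_root} --lang de \
    --split tst-COMMON --task st --output ${eval_data}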

Our Shiftable Context was applied during SimulST inference using SimulEval version 1.0.1 by running the following script:

simuleval \
    --agent $fairseq_dir/fairseq/examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py \
    --source ${source} \
    --target ${target} \
    --data-bin ${data_dir} \
    --config config_st.yaml \
    --port 1227 --gpu \
    --model-path ${save_dir}/checkpoint_average.pt \
    --output ${output_dir} \
    --scores --change-model --shift-right-context --shift-left-context --shift-center-context

In the command, ${source} refers to a file listing one audio file path per line, and ${target} refers to a file with the corresponding reference translations, one per line. Finally, ${output_dir} is the directory in which the evaluation output is saved.
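
If the test set was segmented with seg_mustc_data.py as above, both files are produced by that script. The file names below follow the fairseq SimulST documentation and should be treated as an assumption:

source=${eval_data}/tst-COMMON.wav_list
target=${eval_data}/tst-COMMON.de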

Paper Examples

Below is the audio for the examples provided in the Appendix of our paper.

Example 1:

ted_1102_4.mp4

Example 2:

ted_1378_241.mp4

Example 3:

ted_1166_110.mp4

Example 4:

ted_1131_67.mp4

Citation

@misc{raffel2023shiftable,
      title={Shiftable Context: Addressing Training-Inference Context Mismatch in Simultaneous Speech Translation}, 
      author={Matthew Raffel and Drew Penney and Lizhong Chen},
      year={2023},
      eprint={2307.01377},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Below is the original fairseq README.





Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers.

We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally:
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on MacOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
  • For faster training install NVIDIA's apex library:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
  • For large datasets install PyArrow: pip install pyarrow
  • If you use Docker make sure to increase the shared memory size either with --ipc=host or --shm-size as command-line options to nvidia-docker run.

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks, as well as example training and evaluation commands.

We also have more detailed READMEs to reproduce results from specific papers.


License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
