Implicit Memory Transformer for Computationally Efficient Simultaneous Speech Translation

This repository is a fork of https://github.com/pytorch/fairseq containing the supplementary code used in our ACL 2023 paper Implicit Memory Transformer for Computationally Efficient Simultaneous Speech Translation. Our implementation is in fairseq/models/speech_to_text/modules/implicit_memory_attention.py.

If you use this code, please consider citing our paper.

The script we used to prepare the MuST-C dataset for our paper is examples/speech_to_text/prep_mustc_data.py.
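
A typical invocation follows the upstream fairseq MuST-C example (the vocabulary settings below are the upstream defaults, not necessarily the exact configuration used in the paper), with ${MUSTC_ROOT} as a placeholder for the directory containing the raw MuST-C release:

python examples/speech_to_text/prep_mustc_data.py \
    --data-root ${MUSTC_ROOT} --task asr \
    --vocab-type unigram --vocab-size 5000

python examples/speech_to_text/prep_mustc_data.py \
    --data-root ${MUSTC_ROOT} --task st \
    --vocab-type unigram --vocab-size 8000

This step produces the TSV manifests and the config_asr.yaml/config_st.yaml files that the training commands below read from ${data_dir}.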

The script we used to run the ASR pretraining experiments on a single GPU for the ACL 2023 paper is the following:

fairseq-train ${data_dir} \
    --config-yaml config_asr.yaml --train-subset train_asr --valid-subset dev_asr \
    --save-dir ${save_dir} --num-workers 2 --max-tokens 80000 --max-update 100000 \
    --task speech_to_text --criterion label_smoothed_cross_entropy \
    --arch implicit_memory_transformer --optimizer adam --adam-betas '[0.9, 0.98]' --lr 0.0007 --lr-scheduler inverse_sqrt \
    --simul-type waitk_fixed_pre_decision --fixed-pre-decision-ratio 8 --waitk-lagging 1 \
    --warmup-updates 4000 --warmup-init-lr 0.0001 --clip-norm 10.0 --seed 3 --update-freq 4 \
    --ddp-backend legacy_ddp \
    --log-interval 50 \
    --segment-size 64 --right-context 32 --left-context 32 --max-relative-position 16 --left-context-method pre_output \
    --encoder-normalize-before --decoder-normalize-before --enable-left-grad \
    --patience 5 --keep-last-epochs 5

In the script, ${data_dir} refers to the directory of the prepared dataset, and ${save_dir} refers to the directory to save the model checkpoints.
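
For example, these variables might be set as follows (both paths are placeholders):

data_dir=${MUSTC_ROOT}/en-de        # prepared MuST-C data containing config_asr.yaml and the TSV manifests
save_dir=checkpoints/asr_pretrain   # any writable directory for checkpoints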

Similarly, the script we used to run the SimulST training experiments on a single GPU for the ACL 2023 paper is the following:

fairseq-train ${data_dir} \
    --task speech_to_text --config-yaml config_st.yaml --train-subset train_st --valid-subset dev_st \
    --save-dir ${save_dir} \
    --load-pretrained-encoder-from ${pre_train_dir}/checkpoint_average.pt \
    --arch implicit_memory_transformer \
    --simul-type waitk_fixed_pre_decision --criterion label_smoothed_cross_entropy --fixed-pre-decision-ratio 8 --waitk-lagging 1 \
    --max-tokens 40000 --num-workers 1 --update-freq 8 \
    --optimizer adam --adam-betas '[0.9, 0.98]' --lr 0.00035 --lr-scheduler inverse_sqrt --warmup-updates 7500 --warmup-init-lr 0.0001 --clip-norm 10.0 \
    --max-update 100000 --seed 4 --ddp-backend legacy_ddp --log-interval 50 \
    --segment-size 64 --right-context 32 --left-context 32 --max-relative-position 16 --left-context-method pre_output \
    --encoder-normalize-before --decoder-normalize-before --enable-left-grad \
    --attention-dropout 0.2 --activation-dropout 0.2 --weight-decay 0.0001 \
    --patience 10 --keep-last-epochs 10

We used scripts/average_checkpoints.py to average model checkpoints after both ASR and SimulST training. In the SimulST script above, ${pre_train_dir} refers to the ASR ${save_dir}, and checkpoint_average.pt is the averaged ASR checkpoint produced by this script.
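
A typical invocation (the number of checkpoints to average is illustrative and matches --keep-last-epochs 5 in the ASR script above; adjust accordingly for the SimulST run):

python scripts/average_checkpoints.py \
    --inputs ${save_dir} \
    --num-epoch-checkpoints 5 \
    --output ${save_dir}/checkpoint_average.pt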

The script we used to prepare our test set for evaluation is examples/speech_to_text/seg_mustc_data.py.
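
A representative invocation, following the upstream fairseq simultaneous translation example (the language pair, split, and output path are placeholders):

python examples/speech_to_text/seg_mustc_data.py \
    --data-root ${MUSTC_ROOT} --lang de \
    --split tst-COMMON --task st \
    --output ${eval_data}

This writes per-utterance wav files along with the source list and reference text used for evaluation below.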

We performed inference on our Implicit Memory Transformer using SimulEval (https://github.com/facebookresearch/SimulEval) version 1.0.1.
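
As a rough sketch, an evaluation command in the style of the upstream fairseq SimulST example might look like the following; the agent path, file names, and output directory are placeholders, and flag names may differ slightly across SimulEval versions:

simuleval \
    --agent examples/speech_to_text/simultaneous_translation/agents/fairseq_simul_st_agent.py \
    --source ${eval_data}/tst-COMMON.wav_list \
    --target ${eval_data}/tst-COMMON.de \
    --data-bin ${data_dir} \
    --config config_st.yaml \
    --model-path ${save_dir}/checkpoint_average.pt \
    --output ${output_dir} \
    --scores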

Citation

@inproceedings{raffel-chen-2023-implicit,
    title = "Implicit Memory Transformer for Computationally Efficient Simultaneous Speech Translation",
    author = "Raffel, Matthew  and
      Chen, Lizhong",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2023",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.findings-acl.816",
    pages = "12900--12907",
    abstract = "Simultaneous speech translation is an essential communication task difficult for humans whereby a translation is generated concurrently with oncoming speech inputs. For such a streaming task, transformers using block processing to break an input sequence into segments have achieved state-of-the-art performance at a reduced cost. Current methods to allow information to propagate across segments, including left context and memory banks, have faltered as they are both insufficient representations and unnecessarily expensive to compute. In this paper, we propose an Implicit Memory Transformer that implicitly retains memory through a new left context method, removing the need to explicitly represent memory with memory banks. We generate the left context from the attention output of the previous segment and include it in the keys and values of the current segment{'}s attention calculation. Experiments on the MuST-C dataset show that the Implicit Memory Transformer provides a substantial speedup on the encoder forward pass with nearly identical translation quality when compared with the state-of-the-art approach that employs both left context and memory banks.",
}

Below is the original fairseq README.





Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers:

List of implemented papers

What's New:

Previous updates

Features:

We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)
# 'Hallo Welt'

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally:
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on MacOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
  • For faster training install NVIDIA's apex library:
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
  • For large datasets install PyArrow: pip install pyarrow
  • If you use Docker, make sure to increase the shared memory size either with --ipc=host or --shm-size as command line options to nvidia-docker run.

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.

We also have more detailed READMEs to reproduce results from specific papers:

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
