
Improving Zero-shot Translation

This repo implements the IWSLT 2017 paper -- Improving Zero-shot Translation of Low-resource Languages.


Scenario

Before the experimental setup, if you are wondering what type of MT problem we are approaching, consider the following scenario:

  • For languages X, Y, and P, parallel training data is available only for the X-P and Y-P pairs.
  • This allows training a multilingual model with four translation directions (X→P, P→X, Y→P, P→Y).
  • At inference time, however, you can attempt to translate between the X-Y pair -- also known as Zero-Shot Translation (ZST).

Given that the large majority of language pairs lack parallel data, ZST becomes a super exciting approach, especially if the translations are usable. In practice, however, naive ZST often yields poor outputs -- for instance, mixed-language output on top of wrong translations.

What you are going to replicate below answers the question -- how to improve over naive zero-shot inference by leveraging a baseline multilingual model. For further details on the approach, see the paper.

Experimental Setup


Requirements

Run ./setup-env.sh, or see the dependencies for each repo.

Data Preparation

For this experiment, we use the TED Talks data from Qi et al.

./scripts/get-ted-talks-data.sh

Pre-Training Baseline (Multilingual) Model

Following the (X, Y, P) scenario, let's take Italian as X (it), Romanian as Y (ro), and English as P (en).

Preprocess

./scripts/preprocess.sh 'it ro'

We assume en as the target for the it and ro sources. In total, we process multilingual training data covering four translation directions (it→en, en→it, ro→en, en→ro).
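
As a rough illustration of what the processed data may look like -- a minimal sketch assuming the standard target-language-flag convention of multilingual NMT (a <2tgt> token prepended to the source side); the exact tag format produced by preprocess.sh may differ:

  <2en> ciao mondo    ||| hello world   (it→en)
  <2it> hello world   ||| ciao mondo    (en→it)
  <2en> salut lume    ||| hello world   (ro→en)
  <2ro> hello world   ||| salut lume    (en→ro)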

Pre-Training

./pretrain-baseline.sh

Train Zero-Shot Model

Before ZST training, let's extract n-way parallel evaluation data (e.g. X-P-Y) from the X-P and Y-P pairs. This is important for evaluating the X<>Y ZST pair or the alternative pivot translation X<>P<>Y.

./scripts/get-n-way-parallel-data.sh [zst-src-lang-id] [zst-tgt-lang-id] [pivot-lang-id]
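
For the Italian-Romanian example with English as the pivot, the call would be (assuming the same language ids used during preprocessing):

./scripts/get-n-way-parallel-data.sh it ro en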

Train ZST Model

./train-zst-model.sh [zst-src-lang-id] [zst-tgt-lang-id] [pre-trained-model-dir] [zst-training-rounds] [gpu-id]
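
For instance, for the it-ro ZST pair -- here checkpoints/baseline, the 3 training rounds, and GPU 0 are illustrative placeholders, not values prescribed by the repo:

./train-zst-model.sh it ro checkpoints/baseline 3 0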

Evaluation

Takes a preprocessed source file, translates it, and evaluates the output. For src-pivot-tgt pivot-based evaluation, specify the pivot language id.

./translate_evaluate.sh [data-bin-dir] [src-input] [model] [gpu-id] [src-lang-id] [tgt-lang-id] [pivot-lang-id]
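
For example, a direct it→ro ZST evaluation, followed by an it→en→ro pivot-based one -- data-bin, test.it, and checkpoint_best.pt are illustrative placeholders:

./translate_evaluate.sh data-bin test.it checkpoint_best.pt 0 it ro
./translate_evaluate.sh data-bin test.it checkpoint_best.pt 0 it ro en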


Reference

@article{lakew2018improving,
  title={Improving zero-shot translation of low-resource languages},
  author={Lakew, Surafel M and Lotito, Quintino F and Negri, Matteo and Turchi, Marco and Federico, Marcello},
  journal={arXiv preprint arXiv:1811.01389},
  year={2018}
}

@article{lakew2019multilingual,
  title={Multilingual Neural Machine Translation for Zero-Resource Languages},
  author={Lakew, Surafel M and Federico, Marcello and Negri, Matteo and Turchi, Marco},
  journal={arXiv preprint arXiv:1909.07342},
  year={2019}
}
