Add TER (as implemented in sacrebleu) #3153

Merged: 6 commits merged into huggingface:master on Nov 2, 2021

Conversation

BramVanroy (Contributor) commented Oct 23, 2021

Implements TER (Translation Edit Rate) as implemented in sacrebleu. The sacrebleu BLEU metric is already available in datasets, so I thought this would be a nice addition.

I started from the existing sacrebleu metric implementation, as the two metrics have a lot in common.

Verified against sacrebleu's test suite that this indeed works as intended:

import datasets


# each test case: (hypotheses, references, expected TER score in [0, 1])
test_cases = [
    (['aaaa bbbb cccc dddd'], ['aaaa bbbb cccc dddd'], 0),  # perfect match
    (['dddd eeee ffff'], ['aaaa bbbb cccc'], 1),  # no overlap
    ([''], ['a'], 1),  # corner case, empty hypothesis
    (['d e f g h a b c'], ['a b c d e f g h'], 1 / 8),  # a single shift fixes MT
    (
        [
            'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit das Bild ungefähr dieselbe Größe aufweist wie die andere Größe .',
            'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
            'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie Werte für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " ein .',
            'Sie können beispielsweise ein Dokument erstellen , das ein Auto über die Bühne enthält .',
            'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
        ],
        [
            'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit die Darstellung des Bildes in einer anderen Größe beibehalten wird .',
            'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "',
            'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " niedrigere Werte ein .',
            'Sie können beispielsweise ein Dokument erstellen , das ein Auto enthalt , das sich über die Bühne bewegt .',
            'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "',
        ],
        0.136  # realistic example from WMT dev data (2019)
    ),
]

# load the metric from a local checkout of the datasets repo
ter = datasets.load_metric(r"path\to\datasets\metrics\ter")

# quick sanity check with two references per prediction
predictions = ["hello there general kenobi", "foo bar foobar"]
references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]]
print(ter.compute(predictions=predictions, references=references))

for hyp, ref, score in test_cases:
    # Note the reference transformation, which differs from sacrebleu's input format
    results = ter.compute(predictions=hyp, references=[[r] for r in ref])
    assert 100 * score == results["score"], f"expected {100 * score}, got {results['score']}"
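
Once merged, the metric should also be loadable by name rather than from a local path. A minimal usage sketch, assuming the metric gets registered under the name "ter" (the "score" key matches the assertions above):

import datasets

ter = datasets.load_metric("ter")
results = ter.compute(
    predictions=["hello there general kenobi"],
    references=[["hello there general kenobi", "hello there !"]],
)
print(results["score"])  # 0.0 for a perfect match against the first reference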

slowwavesleep (Contributor) commented Oct 25, 2021

The problem appears to stem from the omission of the lines you mentioned. If you add them back and try the examples from this tutorial (the sacrebleu metric example), the code you implemented works fine.

I think the purpose of these lines is as follows:

  1. Sacrebleu metrics confusingly expect a nested list of strings when you have just one reference for each hypothesis (i.e. [["example1", "example2", "example3"]]), while for cases with more than one reference per hypothesis a nested list of lists of strings (i.e. [["ref1a", "ref1b"], ["ref2a", "ref2b"], ["ref3a", "ref3b"]]) is expected instead. So the transformed_references line produces the single-reference format that sacrebleu's TER implementation requires, which you can't pass to compute directly (see the sketch after this list).
  2. I'm assuming the additional check is also related to that confusing one-reference/many-references format, because it's really difficult to tell what exactly you're doing wrong if you're not aware of that issue.
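
For illustration, here is a minimal sketch of the two formats and the transposition being described. The transformed_references name and the exact check are assumptions modelled on the existing sacrebleu metric wrapper, not necessarily the exact code in this PR:

# datasets-style input: one list of reference strings per prediction
predictions = ["hello there general kenobi", "foo bar foobar"]
references = [
    ["hello there general kenobi", "hello there !"],
    ["foo bar foobar", "foo bar foobar"],
]

# the check: sacrebleu needs the same number of references for every prediction
references_per_prediction = len(references[0])
if any(len(refs) != references_per_prediction for refs in references):
    raise ValueError("Sacrebleu requires the same number of references for each prediction")

# the transposition: one "stream" per reference index, each holding one
# reference string per prediction -- this is the layout sacrebleu expects
transformed_references = [
    [refs[i] for refs in references] for i in range(references_per_prediction)
]
# -> [['hello there general kenobi', 'foo bar foobar'],
#     ['hello there !', 'foo bar foobar']]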

mariosasko (Collaborator) left a comment

Thanks, @BramVanroy! This already looks nice.

To fix the issues with style, you can run make style, or mingw32-make style if you are on Windows.

metrics/ter/ter.py — review comments (outdated, resolved)
BramVanroy changed the title from "[help wanted] Add TER (as implemented in sacrebleu)" to "Add TER (as implemented in sacrebleu)" on Oct 31, 2021
lhoestq (Member) left a comment

Thanks for adding this metric :)

It all looks good to me. I just added one line in the docstring for doctest.
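
For context, doctests in datasets metric docstrings generally look something like the sketch below; this is a hypothetical illustration, not the exact line that was added:

>>> ter = datasets.load_metric("ter")
>>> predictions = ["hello there general kenobi"]
>>> references = [["hello there general kenobi", "hello there !"]]
>>> results = ter.compute(predictions=predictions, references=references)
>>> print(results["score"])
0.0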

lhoestq merged commit 03afcab into huggingface:master on Nov 2, 2021