
Evaluator omits an output with the empty inputs #1032

Open
de9uch1 opened this issue Nov 25, 2024 · 0 comments
de9uch1 commented Nov 25, 2024

Bug description

The Evaluator class should return N scores for N inputs, but it omits an output when an empty input is given.

How to reproduce

  1. Prepare a Python environment
  2. pip install pymarian
  3. Run the following code:
from huggingface_hub import hf_hub_download as hf_get
from pymarian import Evaluator

model_id = "marian-nmt/bleurt-20"
model = hf_get(model_id, filename="checkpoints/marian.model.bin")
vocab = hf_get(model_id, filename="vocab.spm")

evaluator = Evaluator.new(model_file=model, vocab_file=vocab, like="bleurt", cpu_threads=8)

for score in evaluator.evaluate(["\t"]):
    print(score)

This raises an assertion error:

Traceback (most recent call last):
  File "<stdin>", line 10, in <module>
  File "<path_to_pymarian>/pymarian/__init__.py", line 142, in evaluate
    assert len(scores) == len(batch)
AssertionError

In this example, len(batch) is 1, but len(scores) is 0, because the empty input produces no score.
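Until this is fixed upstream, one possible workaround (a sketch only; the `evaluate` call signature is assumed from the snippet above, and `evaluate_safe` is a hypothetical helper, not part of pymarian) is to filter out empty or whitespace-only inputs before calling the evaluator and re-insert a placeholder score for them afterwards, so the caller always gets back N scores for N inputs:

```python
def evaluate_safe(evaluator, inputs, placeholder=float("nan")):
    """Return one score per input, substituting `placeholder` for
    empty/whitespace-only inputs that the evaluator would omit."""
    # Indices of inputs that contain at least one non-whitespace field.
    keep = [i for i, line in enumerate(inputs) if line.strip("\t\n ")]
    # Only send the non-empty inputs to the evaluator.
    scores = list(evaluator.evaluate([inputs[i] for i in keep])) if keep else []
    # Re-insert a placeholder score at each omitted position.
    out = [placeholder] * len(inputs)
    for i, s in zip(keep, scores):
        out[i] = s
    return out
```

With this wrapper, `evaluate_safe(evaluator, ["\t"])` returns a single placeholder score instead of triggering the assertion.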

Context

  • Python version: Python 3.10.15
  • Marian version: v1.12.31 dd3571de
@de9uch1 de9uch1 added the bug label Nov 25, 2024