
Amendment to PR #7423 #7575

Merged: 4 commits into RasaHQ:1.10.x on Dec 18, 2020

Conversation

@hotzenklotz (Contributor) commented Dec 17, 2020

Proposed changes:
@dakshvar22 To my dismay, I discovered that my refactoring reintroduced a bug in my original PR #7423. Unfortunately, it was not caught by any of the unit tests. I came across it when running a new response selector evaluation today. This PR should fix it.

In short, the default value for a response selector key was wrong. It needs to default to None for all "regular" intent examples; otherwise those examples are not filtered out further down the pipeline and crash the sklearn reports.
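Conceptually, the fix behaves like the following sketch (the function and field names here are illustrative and do not reflect Rasa's actual internals): examples whose response selector key defaults to None are dropped before the sklearn report is computed.

```python
from sklearn.metrics import classification_report

def evaluate_response_selection(examples):
    """Illustrative sketch only; the field names are hypothetical."""
    # Regular intent examples carry no response selector key; defaulting the
    # key to None (rather than, say, an empty string) lets us drop them here.
    relevant = [e for e in examples if e.get("response_selector_key") is not None]

    targets = [e["response_selector_key"] for e in relevant]
    predictions = [e["predicted_response_key"] for e in relevant]

    # With the None-keyed examples filtered out, the sklearn report no longer
    # crashes on examples that have no response key at all.
    return classification_report(targets, predictions)
```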

cc @tmbo

Status (please check what you already did):

  • added some tests for the functionality
  • updated the documentation
  • updated the changelog (please check changelog for instructions)
  • reformat files using black (please check Readme for instructions)

@dakshvar22 (Contributor)
Thanks @hotzenklotz. Can you please also post what the output now looks like after running a fresh evaluation on the trained response selector model?

@hotzenklotz (Contributor, Author)
It looks pretty much the same as before. Only now the response selector labels include their parent intent's name as well, e.g. ask_faq/question_1.

[Screenshot: response selector evaluation output]
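For context, the full labels in that report are just the retrieval intent joined with its response key. A minimal sketch of that formatting (the function name is hypothetical, not Rasa's actual code):

```python
def full_retrieval_intent_label(intent: str, response_key: str) -> str:
    # e.g. ("ask_faq", "question_1") -> "ask_faq/question_1"
    return f"{intent}/{response_key}"
```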

@dakshvar22 (Contributor)
Thanks! Can you please also add a unit test which would check if training examples with no response keys are correctly filtered during evaluation?

@hotzenklotz (Contributor, Author) commented Dec 17, 2020

> Can you please also add a unit test which would check if training examples with no response keys are correctly filtered during evaluation?

There is already a unit test for that: https://github.com/RasaHQ/rasa/blob/1.10.x/tests/nlu/test_evaluation.py#L582

@dakshvar22 self-requested a review on December 17, 2020 at 16:31
@dakshvar22 (Contributor)
@hotzenklotz I just pushed a changelog entry to indicate this is a bugfix.

@dakshvar22 merged commit e59b818 into RasaHQ:1.10.x on Dec 18, 2020