
Recsys Metric : MRR #2843

Closed
wants to merge 1 commit into from

Conversation

shivance commented Feb 3, 2023

Fixes #2631

Adds the Mean Reciprocal Rank (MRR) metric used in recommendation systems, with tests, so that it can be used directly.
Reference
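
For reference, the standard definition: over $N$ ranked lists (queries), MRR averages the reciprocal rank of the first relevant item,

$$\mathrm{MRR} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{\mathrm{rank}_i}$$

where $\mathrm{rank}_i$ is the 1-based position of the first relevant item in the ranked list for query $i$.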

Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)

github-actions bot added the docs and module: metrics labels on Feb 3, 2023
shivance (Author) commented Feb 3, 2023

cc @vfdev-5: code formatting and unit tests are still pending.
This is an initial PR; since I am new to the Ignite ecosystem, I am looking for initial feedback!

vfdev-5 (Collaborator) commented Feb 3, 2023

@shivance have you tried to run this locally? I also wonder about the expected result (reference implementation). The linked implementation at https://github.com/zuoxingdong/recsys_metrics/blob/master/recsys_metrics/mean_reciprocal_rank.py is not that satisfactory to me. Could you do some quick research on MRR and find another package that computes it (something like sklearn)?

Also, please read this carefully: https://pytorch.org/ignite/metrics.html#ignite-metrics
We prefer to perform online computation (by batches): accumulate per-batch quantities as updates arrive and only produce the final metric value in compute, rather than storing all predictions and targets and computing the metric from scratch in compute.
Take a look at how the Accuracy metric is implemented in Ignite as an example:

```python
self._num_correct = torch.tensor(0, device=self._device)  # running count of correct predictions
self._num_examples = 0  # running count of examples seen
```
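
To make the online pattern concrete for MRR, here is a minimal sketch of how it could look; the class name, the state names (`_sum_rr`, `_num_examples`), and the assumption that every query row contains at least one relevant item are illustrative choices for this example, not a final design:

```python
import torch

from ignite.exceptions import NotComputableError
from ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce


class MeanReciprocalRank(Metric):
    """Sketch only: expects output = (y_pred, y) with y_pred (batch, n_items) scores
    and y (batch, n_items) 0/1 relevance, at least one relevant item per row."""

    @reinit__is_reduced
    def reset(self) -> None:
        self._sum_rr = torch.tensor(0.0, device=self._device)  # running sum of reciprocal ranks
        self._num_examples = 0  # running count of queries seen

    @reinit__is_reduced
    def update(self, output) -> None:
        y_pred, y = output
        order = y_pred.argsort(dim=-1, descending=True)  # item indices sorted by predicted score
        relevance = y.take_along_dim(order, dim=-1)      # targets reordered by that ranking
        first_hit = relevance.argmax(dim=-1)             # 0-based rank of the first relevant item
        self._sum_rr += (1.0 / (first_hit + 1).float()).sum().to(self._device)
        self._num_examples += y.shape[0]

    @sync_all_reduce("_sum_rr", "_num_examples")
    def compute(self) -> float:
        if self._num_examples == 0:
            raise NotComputableError("MeanReciprocalRank must have at least one example before compute().")
        return self._sum_rr.item() / self._num_examples
```

This keeps only two scalars of state between batches, which is exactly the accumulate-then-compute pattern Accuracy follows.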

The main question is whether we can correctly accumulate `self._relevance` batch by batch, compared to computing relevance over all the data at once.

```python
relevance = y.take_along_dim(topk_idx, dim=-1)  # reorder targets by the predicted ranking
self._relevance = torch.cat([self._relevance, relevance], dim=-1)  # stores every batch's relevance
```
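
One observation that may settle this: each query's reciprocal rank depends only on its own row of `y_pred` and `y`, so a running sum accumulated batch by batch is exactly equal to the full-data result, and the concatenation above (which keeps every prediction in memory) can be avoided. A quick self-contained check, with purely illustrative names:

```python
import torch

torch.manual_seed(0)
y_pred = torch.rand(100, 20)  # predicted scores: 100 queries, 20 items each
y = torch.zeros(100, 20)
y[torch.arange(100), torch.randint(0, 20, (100,))] = 1.0  # one relevant item per query


def reciprocal_ranks(scores, rel):
    order = scores.argsort(dim=-1, descending=True)
    hits = rel.take_along_dim(order, dim=-1)
    return 1.0 / (hits.argmax(dim=-1) + 1).float()


# Full-data computation
mrr_full = reciprocal_ranks(y_pred, y).mean().item()

# Online computation: accumulate a sum and a count over batches of 16
total, n = 0.0, 0
for i in range(0, 100, 16):
    rr = reciprocal_ranks(y_pred[i : i + 16], y[i : i + 16])
    total += rr.sum().item()
    n += rr.numel()

assert abs(mrr_full - total / n) < 1e-6  # identical up to float error
```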

```python
@sync_all_reduce("_sum", "_num_examples")
```
vfdev-5 (Collaborator) commented on this line, Feb 3, 2023

This line is incorrect, but let's fix it later once we have a good understanding of how the metric is computed.

@shivance shivance closed this Mar 15, 2023
Successfully merging this pull request may close these issues.

Implementation of RecSys Metrics