Commit

Merge pull request #14 from fidelity/diversity_metric
Diversity metric _ inter-list diversity
takojunior authored Jan 14, 2022
2 parents 8c753af + f2b7de2 commit 8061d4c
Showing 30 changed files with 781 additions and 25 deletions.
6 changes: 6 additions & 0 deletions CHANGELOG.txt
@@ -2,6 +2,12 @@
CHANGELOG
=========

-------------------------------------------------------------------------------
Jan 07, 2022 1.3.1
-------------------------------------------------------------------------------

- Added Inter-List Diversity to recommender metrics.

-------------------------------------------------------------------------------
July 29, 2021 1.3.0
-------------------------------------------------------------------------------
5 changes: 4 additions & 1 deletion README.md
@@ -29,6 +29,7 @@ Jurity is developed by the Artificial Intelligence Center of Excellence at Fidel
* [NDCG: Normalized discounted cumulative gain](https://fidelity.github.io/jurity/about_reco.html#ndcg-normalized-discounted-cumulative-gain)
* [Precision@K](https://fidelity.github.io/jurity/about_reco.html#precision)
* [Recall@K](https://fidelity.github.io/jurity/about_reco.html#recall)
* [Inter-List Diversity@K](https://fidelity.github.io/jurity/about_reco.html#inter-list-diversity)

## Classification Metrics
* [Accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html)
@@ -96,7 +97,7 @@ print("Fairness Metrics After:", BinaryFairnessMetrics().get_all_scores(labels,

```python
# Import recommenders metrics
from jurity.recommenders import BinaryRecoMetrics, RankingRecoMetrics
from jurity.recommenders import BinaryRecoMetrics, RankingRecoMetrics, DiversityRecoMetrics
import pandas as pd

# Data
@@ -112,6 +113,7 @@
map_k = RankingRecoMetrics.MAP(click_column="clicks", k=2)
ndcg_k = RankingRecoMetrics.NDCG(click_column="clicks", k=3)
precision_k = RankingRecoMetrics.Precision(click_column="clicks", k=2)
recall_k = RankingRecoMetrics.Recall(click_column="clicks", k=2)
interlist_diversity_k = DiversityRecoMetrics.InterListDiversity(click_column="clicks", k=2)

# Scores
print("AUC:", auc.get_score(actual, predicted))
@@ -122,6 +124,7 @@
print("MAP@K:", map_k.get_score(actual, predicted))
print("NDCG:", ndcg_k.get_score(actual, predicted))
print("Precision@K:", precision_k.get_score(actual, predicted))
print("Recall@K:", recall_k.get_score(actual, predicted))
print("Inter-List Diversity@K:", interlist_diversity_k.get_score(actual, predicted))
```
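The `# Data` section is elided by the diff above. A hypothetical minimal setup is sketched below; the column names `user_id` and `item_id` and the example values are assumptions for illustration, not taken from this commit (only `click_column="clicks"` appears in the constructors above):

```python
import pandas as pd

# Hypothetical data; column names and values are illustrative assumptions.
actual = pd.DataFrame({"user_id": [0, 0, 1, 1],
                       "item_id": [0, 1, 0, 2],
                       "clicks":  [0, 1, 0, 1]})            # observed feedback

predicted = pd.DataFrame({"user_id": [0, 0, 1, 1],
                          "item_id": [0, 1, 0, 2],
                          "clicks":  [0.8, 0.3, 0.9, 0.1]})  # model scores

print(actual.shape, predicted.shape)
```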

## Quick Start: Classification Evaluation
2 changes: 1 addition & 1 deletion docs/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: b10c9597fde2ec852a21a6d3da16cfd8
config: a4efaf5788f180055a98697641fd0ae0
tags: 645f666f9bcd5a90fca523b33c5a78b7
17 changes: 17 additions & 0 deletions docs/_sources/about_reco.rst.txt
@@ -107,3 +107,20 @@ Improving the highest-ranked recommendations has a more important effect than im

.. math::
NDCG@k = \frac{1}{\left | A \right |} \sum_{i=1}^{\left | A \right |} \frac {\sum_{r=1}^{\left | P_i \right |} \frac{rel(P_{i,r})}{log_2(r+1)}}{\sum_{r=1}^{\left | A_i \right |} \frac{1}{log_2(r+1)}}

Diversity Recommender Metrics
-----------------------------
Diversity recommender metrics evaluate the quality of recommendations for different notions of diversity.

Inter-List Diversity
^^^^^^^^^^^^^^^^^^^^
Inter-List Diversity@k measures how different users' lists of top-k recommendations are from one another. This metric
has a range in :math:`[0, 1]`; the higher it is, the more diversified the lists of items recommended to different
users are.
Let :math:`U` denote the set of :math:`N` unique users, and let :math:`u_i, u_j \in U` denote the i-th and j-th users
in the user set, :math:`i, j \in \{1, 2, \cdots, N\}`. :math:`R_{u_i}` is the binary indicator vector of the
recommendations provided to :math:`u_i`, and :math:`I` is the set of all unique user pairs,
:math:`\forall~i<j, \{u_i, u_j\} \in I`.

.. math::
Inter\mbox{-}list~diversity = \frac{\sum_{i,j, \{u_i, u_j\} \in I}(cosine\_distance(R_{u_i}, R_{u_j}))}{|I|}
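
To make the definition concrete, here is a minimal NumPy sketch (an illustration of the formula, not Jurity's implementation) that averages pairwise cosine distances between users' binary recommendation vectors:

```python
import numpy as np
from itertools import combinations

def inter_list_diversity(rec_matrix: np.ndarray) -> float:
    """Mean pairwise cosine distance between the rows of a
    users-by-items binary recommendation matrix."""
    n_users = rec_matrix.shape[0]
    distances = []
    for i, j in combinations(range(n_users), 2):  # all unique user pairs I
        u, v = rec_matrix[i], rec_matrix[j]
        cos_sim = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
        distances.append(1.0 - cos_sim)           # cosine distance
    return float(np.mean(distances))

# Three users, four items; 1 marks a recommended item.
recs = np.array([[1, 1, 0, 0],   # users 0 and 1 get identical lists
                 [1, 1, 0, 0],
                 [0, 0, 1, 1]])  # user 2 gets a disjoint list
score = inter_list_diversity(recs)
print(score)  # averages distances 0, 1, 1 over the 3 pairs -> 2/3
```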
8 changes: 8 additions & 0 deletions docs/_sources/api.rst.txt
@@ -55,3 +55,11 @@ Ranking Recommenders Metrics
:members:
:undoc-members:
:show-inheritance:

Diversity Recommenders Metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. autoclass:: jurity.recommenders.DiversityRecoMetrics
:members:
:undoc-members:
:show-inheritance:
1 change: 1 addition & 0 deletions docs/_sources/index.rst.txt
@@ -37,6 +37,7 @@ Available Recommenders Metrics
* `Recall@K <https://fidelity.github.io/jurity/about_reco.html#recall>`_
* `MAP@K: Mean Average Precision <https://fidelity.github.io/jurity/about_reco.html#map-mean-average-precision>`_
* `NDCG: Normalized discounted cumulative gain <https://fidelity.github.io/jurity/about_reco.html#ndcg-normalized-discounted-cumulative-gain>`_
* `Inter-List Diversity <https://fidelity.github.io/jurity/about_reco.html#inter-list-diversity>`_

Available Classification Metrics
---------------------------------
2 changes: 1 addition & 1 deletion docs/_static/documentation_options.js
@@ -1,6 +1,6 @@
var DOCUMENTATION_OPTIONS = {
URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'),
VERSION: '1.2.1',
VERSION: '1.3.1',
LANGUAGE: 'None',
COLLAPSE_INDEX: false,
BUILDER: 'html',
4 changes: 2 additions & 2 deletions docs/about_fairness.html
@@ -4,7 +4,7 @@
<meta charset="utf-8" /><meta name="generator" content="Docutils 0.17.1: http://docutils.sourceforge.net/" />

<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>About Algorithmic Fairness &mdash; Jurity 1.2.1 documentation</title>
<title>About Algorithmic Fairness &mdash; Jurity 1.3.1 documentation</title>
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<!--[if lt IE 9]>
@@ -15,7 +15,7 @@
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/doctools.js"></script>
<script async="async" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<script defer="defer" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<script src="_static/js/theme.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
23 changes: 21 additions & 2 deletions docs/about_reco.html
@@ -4,7 +4,7 @@
<meta charset="utf-8" /><meta name="generator" content="Docutils 0.17.1: http://docutils.sourceforge.net/" />

<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>About Recommenders Metrics &mdash; Jurity 1.2.1 documentation</title>
<title>About Recommenders Metrics &mdash; Jurity 1.3.1 documentation</title>
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<!--[if lt IE 9]>
@@ -15,7 +15,7 @@
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/doctools.js"></script>
<script async="async" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<script defer="defer" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<script src="_static/js/theme.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
@@ -55,6 +55,10 @@
<li class="toctree-l3"><a class="reference internal" href="#ndcg-normalized-discounted-cumulative-gain">NDCG: Normalized Discounted Cumulative Gain</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#diversity-recommender-metrics">Diversity Recommender Metrics</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#inter-list-diversity">Inter-List Diversity</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="api.html">Jurity Public API</a></li>
@@ -167,6 +171,21 @@ <h3>NDCG: Normalized Discounted Cumulative Gain<a class="headerlink" href="#ndcg
\[NDCG&#64;k = \frac{1}{\left | A \right |} \sum_{i=1}^{\left | A \right |} \frac {\sum_{r=1}^{\left | P_i \right |} \frac{rel(P_{i,r})}{log_2(r+1)}}{\sum_{r=1}^{\left | A_i \right |} \frac{1}{log_2(r+1)}}\]</div>
</section>
</section>
<section id="diversity-recommender-metrics">
<h2>Diversity Recommender Metrics<a class="headerlink" href="#diversity-recommender-metrics" title="Permalink to this headline"></a></h2>
<p>Diversity recommender metrics evaluate the quality of recommendations for different notions of diversity.</p>
<section id="inter-list-diversity">
<h3>Inter-List Diversity<a class="headerlink" href="#inter-list-diversity" title="Permalink to this headline"></a></h3>
<p>Inter-List Diversity&#64;k measures how different users’ lists of top-k recommendations are from one another. This metric
has a range in <span class="math notranslate nohighlight">\([0, 1]\)</span>; the higher it is, the more diversified the lists of items recommended to different users are.
Let <span class="math notranslate nohighlight">\(U\)</span> denote the set of <span class="math notranslate nohighlight">\(N\)</span> unique users, and let <span class="math notranslate nohighlight">\(u_i, u_j \in U\)</span> denote the i-th and j-th users in the
user set, <span class="math notranslate nohighlight">\(i, j \in \{1, 2, \cdots, N\}\)</span>. <span class="math notranslate nohighlight">\(R_{u_i}\)</span> is the binary indicator vector of the
recommendations provided to <span class="math notranslate nohighlight">\(u_i\)</span>, and <span class="math notranslate nohighlight">\(I\)</span> is the set of all unique user pairs, <span class="math notranslate nohighlight">\(\forall~i&lt;j, \{u_i, u_j\} \in I\)</span>.</p>
<div class="math notranslate nohighlight">
\[Inter\mbox{-}list~diversity = \frac{\sum_{i,j, \{u_i, u_j\} \in I}(cosine\_distance(R_{u_i}, R_{u_j}))}{|I|}\]</div>
</section>
</section>
</section>

