
Fix/tutorials execution #18

Merged · 9 commits · Sep 19, 2023
1,581 changes: 819 additions & 762 deletions tutorials/BasicTutorial.ipynb

Large diffs are not rendered by default.

849 changes: 849 additions & 0 deletions tutorials/BasicTutorial_colab.ipynb

Large diffs are not rendered by default.

17 changes: 15 additions & 2 deletions tutorials/tutorial_clustering_tree_explainer.ipynb
@@ -9,7 +9,20 @@
"# Clustering Tree Explainer\n",
"\n",
"\n",
-"When executing a clustering algorithm like K-Means usually the samples in our dataset are partitioned to K different clusters/groups. However, sometimes is difficult to understand why a sample is assigned to a particular cluster or what are the features that characterize a cluster. To improve interpretability, a small decision tree can be used to partition the data in the assigned clusters of a previously run clustering algorithm. In this tutorial, we show how we can build this decision tree using the `ClusteringTreeExplainer` class. This class builds a decision tree based on the [Iterative Mistake Minimization (IMM)](https://arxiv.org/pdf/2002.12538.pdf) method and [ExKMC: Expanding Explainable k-Means Clustering] (https://arxiv.org/pdf/2006.02399.pdf)\n"
+"When executing a clustering algorithm like K-Means usually the samples in our dataset are partitioned to K different clusters/groups. However, sometimes is difficult to understand why a sample is assigned to a particular cluster or what are the features that characterize a cluster. To improve interpretability, a small decision tree can be used to partition the data in the assigned clusters of a previously run clustering algorithm. In this tutorial, we show how we can build this decision tree using the `ClusteringTreeExplainer` class. This class builds a decision tree based on the [Iterative Mistake Minimization (IMM)](https://arxiv.org/pdf/2002.12538.pdf) method and [ExKMC: Expanding Explainable k-Means Clustering](https://arxiv.org/pdf/2006.02399.pdf)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"To execute this tutorial, you need to install mercury-explainability and pyspark (if they are not already installed). You can install them by executing the next command in a cell:\n",
"\n",
"```\n",
"!pip install mercury-explainability pyspark\n",
"```"
]
},
{
@@ -1295,5 +1308,5 @@
}
},
"nbformat": 4,
-  "nbformat_minor": 1
+  "nbformat_minor": 4
}
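The tutorial cell above describes `ClusteringTreeExplainer` (from mercury-explainability) fitting a small decision tree over K-Means assignments, following IMM/ExKMC. The exact `ClusteringTreeExplainer` API is not shown in this diff, so as a hedged stand-in, the same idea can be sketched with scikit-learn: run K-Means, then fit a tree limited to one leaf per cluster so each cluster gets a single human-readable rule. All names and parameters below are illustrative, not the library's API.

```python
# Stand-in sketch of the explain-clusters-with-a-small-tree idea
# (in the spirit of IMM / ExKMC; NOT mercury-explainability's API).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Three well-separated 2-D blobs as toy data
X = np.vstack([
    rng.normal(loc=center, scale=0.3, size=(50, 2))
    for center in ([0, 0], [3, 0], [0, 3])
])

k = 3
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Limiting the tree to k leaves yields one axis-aligned rule per cluster
tree = DecisionTreeClassifier(max_leaf_nodes=k, random_state=0)
tree.fit(X, kmeans.labels_)

# How often the small tree reproduces the K-Means assignment
agreement = (tree.predict(X) == kmeans.labels_).mean()
print(export_text(tree, feature_names=["x0", "x1"]))
print(f"tree/K-Means agreement: {agreement:.2f}")
```

On well-separated clusters like these, a k-leaf tree typically reproduces the K-Means partition almost exactly, which is what makes the rules a faithful explanation of the clustering.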
1,114 changes: 687 additions & 427 deletions tutorials/tutorial_counterfactual_basic_explainer.ipynb

Large diffs are not rendered by default.
