Self-explanatory tutorials for different model-agnostic and model-specific XAI methods
counterfactuals: An R package for Counterfactual Explanation Methods
SLISEMAP: Combining supervised dimensionality reduction with local explanations
Local Universal Rule-based Explanations
A binary dog-vs-cat image classifier built with deep learning, made more transparent using Local Interpretable Model-agnostic Explanations (LIME): the model predicts dog and cat images while LIME highlights the image regions that drove each prediction.
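As a rough illustration of the workflow just described, the sketch below applies the `lime` Python package to a generic Keras-style image classifier. The model file `dog_cat_model.h5`, the input image `example.jpg`, the 224x224 input size, and a two-class softmax output are all assumptions for the example, not details taken from the listed project.

```python
# Minimal sketch: explaining a dog-vs-cat image classifier with LIME.
# Assumes a Keras model saved as "dog_cat_model.h5" whose predict() returns
# two-class probabilities, and an RGB image "example.jpg" -- both hypothetical.
import numpy as np
from tensorflow import keras
from skimage.io import imread
from skimage.transform import resize
from skimage.segmentation import mark_boundaries
from lime import lime_image

model = keras.models.load_model("dog_cat_model.h5")   # hypothetical path
image = resize(imread("example.jpg"), (224, 224))     # hypothetical input size

def predict_fn(images):
    # LIME passes a batch of perturbed images; return class probabilities.
    return model.predict(np.array(images), verbose=0)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image.astype("double"),  # image to explain
    predict_fn,              # batch prediction function
    top_labels=2,            # dog and cat
    hide_color=0,            # colour used to mask superpixels
    num_samples=1000,        # number of perturbed samples
)

# Highlight the superpixels that most support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp, mask)  # image with explanatory regions outlined
```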
A machine learning project developing classification models to predict COVID-19 diagnosis in paediatric patients.
Classifier Analysis and Fairness Considerations
A Global Model-Agnostic Rule-Based XAI Method based on Parameterised Event Primitives for Time Series Classifiers
Model-independent visual explanation methods for image classifiers.
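One simple model-independent visual explanation of the kind listed above is occlusion sensitivity: slide a grey patch across the image and record how much the predicted class probability drops when each region is hidden. The sketch below is a generic illustration under assumed names, not code from the listed repository; `predict_proba`, the patch size, and the fill value are placeholders.

```python
# Generic occlusion-sensitivity sketch: a model-agnostic visual explanation.
# "predict_proba" is any callable mapping a batch of HxWxC images to class
# probabilities; it stands in for whatever classifier is being explained.
import numpy as np

def occlusion_map(image, predict_proba, target_class, patch=16, fill=0.5):
    """Return a heatmap where high values mark regions the prediction relies on."""
    h, w = image.shape[:2]
    base = predict_proba(image[np.newaxis])[0, target_class]  # unoccluded score
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill          # grey out one patch
            prob = predict_proba(occluded[np.newaxis])[0, target_class]
            heatmap[i // patch, j // patch] = base - prob      # drop in confidence
    return heatmap
```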