Standardized Serverless ML Inference Platform on Kubernetes
Updated Feb 4, 2025 - Python
A PyTorch implementation of Grad-CAM and Grad-CAM++ that can visualize the Class Activation Map (CAM) of any classification network, including custom networks; also implements CAM visualizations for two object-detection networks, Faster R-CNN and RetinaNet. You are welcome to try it out, follow the project, and report issues.
An open source DevOps tool for packaging and versioning AI/ML models, datasets, code, and configuration into an OCI artifact.
Pytorch Implementation of recent visual attribution methods for model interpretability
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Overview of different model interpretability libraries.
Used the Functional API to build custom layers and non-sequential model types in TensorFlow; performed object detection, image segmentation, and interpretation of convolutions. Used generative deep learning, including autoencoders, VAEs, and GANs, to create new content.
Class Activation Map (CAM) Visualizations in PyTorch.
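The Grad-CAM computation behind visualizations like these can be sketched in plain NumPy. This is a minimal illustration of the technique, not any listed repository's code; the toy arrays stand in for a real network's conv-layer activations and their gradients with respect to a class score:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each activation map by the spatial mean of its
    gradient, sum over channels, and keep only positive evidence (ReLU)."""
    # activations, gradients: (channels, height, width) from the target conv layer
    weights = gradients.mean(axis=(1, 2))             # global-average-pool the gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1] for display
    return cam

# Hypothetical toy inputs: 4 channels of 8x8 feature maps
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.random((4, 8, 8))
heatmap = grad_cam(acts, grads)  # (8, 8) map in [0, 1]
```

In practice the activations and gradients come from forward/backward hooks on a chosen convolutional layer, and the heatmap is upsampled to the input resolution before overlaying.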
Code for "Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability" (https://arxiv.org/abs/2010.09750)
Official repository for the paper "Instance-wise Causal Feature Selection for Model Interpretation" (CVPRW 2021)
Interpretability and Fairness in Machine Learning
This repository contains the work from the Cognizant AI Engineer virtual training and internship program on Forage.
AI to Predict Yield in Aeroponics
Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
A machine learning project developing classification models to predict COVID-19 diagnosis in paediatric patients.
An Explainable AI technique introduced in the paper Axiomatic Attribution for Deep Networks
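The technique from that paper, Integrated Gradients, attributes a prediction by averaging gradients along a straight-line path from a baseline to the input and scaling by the input difference. A minimal NumPy sketch, using a toy function with an analytic gradient (f(x) = Σ x_i², a stand-in for a real model):

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=200):
    """IG_i ≈ (x_i - x'_i) * mean of df/dx_i sampled along the path
    x' + α(x - x'), α in (0, 1), via the midpoint rule."""
    alphas = (np.arange(steps) + 0.5) / steps
    path = baseline + alphas[:, None] * (x - baseline)
    grads = np.array([grad_fn(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

# Toy model: f(x) = sum(x_i^2), gradient 2x
grad_f = lambda z: 2 * z
x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros(3)
attr = integrated_gradients(x, baseline, grad_f)
# Completeness axiom: attributions sum to f(x) - f(baseline) = 14
```

The completeness property (attributions summing to the difference in outputs) is the paper's central axiom and makes a handy sanity check for any implementation. For real networks, `grad_fn` would be a framework autograd call rather than a closed-form gradient.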
A major gas and electricity utility that supplies SMEs. Power liberalization of the energy market in Europe has led to significant customer churn. Building a churn model to understand whether price sensitivity is the largest driver of churn, and verifying the hypothesis that price sensitivity is to some extent correlated with churn.
Using LIME and SHAP for model interpretability of Machine Learning Black-box models.
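The core idea behind LIME can be sketched in a few lines of NumPy: perturb the input locally, weight the perturbed samples by proximity, and fit a weighted linear surrogate whose coefficients explain the black box near that point. This is a bare-bones illustration (the real `lime` library perturbs interpretable representations and offers richer kernels); the black-box function below is hypothetical:

```python
import numpy as np

def lime_explain(predict, x, num_samples=5000, width=0.75, seed=0):
    """LIME-style local surrogate: the returned coefficients approximate
    the black box's local feature effects around x."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.1, size=(num_samples, x.size))  # local perturbations
    y = np.array([predict(z) for z in X])
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                         # proximity kernel
    A = np.hstack([X, np.ones((num_samples, 1))])              # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                                           # per-feature weights

# Hypothetical black box: strong linear effect on feature 0, mild quadratic on feature 1
black_box = lambda z: 3.0 * z[0] + 0.2 * z[1] ** 2
weights = lime_explain(black_box, np.array([1.0, 1.0]))
# weights[0] ≈ 3.0; weights[1] ≈ 0.4 (local slope of 0.2*z^2 at z=1)
```

SHAP takes a different route to a similar end, grounding the per-feature weights in Shapley values so that attributions sum exactly to the gap between the prediction and a baseline expectation.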
Covid Detection via CT Scan Image Analysis
Using machine learning models to predict if patients have chronic kidney disease based on a few features. The results of the models are also interpreted to make it more understandable to health practitioners.